00:00:00.001 Started by upstream project "autotest-per-patch" build number 132525 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:07.253 The recommended git tool is: git 00:00:07.253 using credential 00000000-0000-0000-0000-000000000002 00:00:07.255 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:07.266 Fetching changes from the remote Git repository 00:00:07.270 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:07.282 Using shallow fetch with depth 1 00:00:07.282 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:07.282 > git --version # timeout=10 00:00:07.294 > git --version # 'git version 2.39.2' 00:00:07.295 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:07.308 Setting http proxy: proxy-dmz.intel.com:911 00:00:07.308 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:13.268 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:13.282 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:13.295 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:13.295 > git config core.sparsecheckout # timeout=10 00:00:13.308 > git read-tree -mu HEAD # timeout=10 00:00:13.328 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:13.352 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:13.353 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:13.439 [Pipeline] Start of Pipeline 00:00:13.453 [Pipeline] library 00:00:13.455 Loading library shm_lib@master 00:00:13.455 Library shm_lib@master is cached. Copying from home. 00:00:13.477 [Pipeline] node 00:00:13.485 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:13.489 [Pipeline] { 00:00:13.500 [Pipeline] catchError 00:00:13.503 [Pipeline] { 00:00:13.516 [Pipeline] wrap 00:00:13.524 [Pipeline] { 00:00:13.531 [Pipeline] stage 00:00:13.533 [Pipeline] { (Prologue) 00:00:13.727 [Pipeline] sh 00:00:14.013 + logger -p user.info -t JENKINS-CI 00:00:14.034 [Pipeline] echo 00:00:14.035 Node: GP6 00:00:14.045 [Pipeline] sh 00:00:14.352 [Pipeline] setCustomBuildProperty 00:00:14.366 [Pipeline] echo 00:00:14.368 Cleanup processes 00:00:14.374 [Pipeline] sh 00:00:14.659 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:14.659 408280 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:14.675 [Pipeline] sh 00:00:14.960 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:14.960 ++ awk '{print $1}' 00:00:14.960 ++ grep -v 'sudo pgrep' 00:00:14.960 + sudo kill -9 00:00:14.960 + true 00:00:14.976 [Pipeline] cleanWs 00:00:14.987 [WS-CLEANUP] Deleting project workspace... 00:00:14.987 [WS-CLEANUP] Deferred wipeout is used... 
00:00:14.994 [WS-CLEANUP] done 00:00:14.999 [Pipeline] setCustomBuildProperty 00:00:15.015 [Pipeline] sh 00:00:15.301 + sudo git config --global --replace-all safe.directory '*' 00:00:15.404 [Pipeline] httpRequest 00:00:16.069 [Pipeline] echo 00:00:16.070 Sorcerer 10.211.164.101 is alive 00:00:16.077 [Pipeline] retry 00:00:16.078 [Pipeline] { 00:00:16.088 [Pipeline] httpRequest 00:00:16.091 HttpMethod: GET 00:00:16.092 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.092 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.099 Response Code: HTTP/1.1 200 OK 00:00:16.099 Success: Status code 200 is in the accepted range: 200,404 00:00:16.099 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:23.943 [Pipeline] } 00:00:23.963 [Pipeline] // retry 00:00:23.971 [Pipeline] sh 00:00:24.258 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.275 [Pipeline] httpRequest 00:00:24.678 [Pipeline] echo 00:00:24.681 Sorcerer 10.211.164.101 is alive 00:00:24.693 [Pipeline] retry 00:00:24.695 [Pipeline] { 00:00:24.713 [Pipeline] httpRequest 00:00:24.718 HttpMethod: GET 00:00:24.718 URL: http://10.211.164.101/packages/spdk_3c5c3d590e0a54873b227b25579748bea5e847b6.tar.gz 00:00:24.718 Sending request to url: http://10.211.164.101/packages/spdk_3c5c3d590e0a54873b227b25579748bea5e847b6.tar.gz 00:00:24.725 Response Code: HTTP/1.1 200 OK 00:00:24.726 Success: Status code 200 is in the accepted range: 200,404 00:00:24.726 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3c5c3d590e0a54873b227b25579748bea5e847b6.tar.gz 00:06:49.566 [Pipeline] } 00:06:49.624 [Pipeline] // retry 00:06:49.629 [Pipeline] sh 00:06:49.934 + tar --no-same-owner -xf spdk_3c5c3d590e0a54873b227b25579748bea5e847b6.tar.gz 00:06:52.492 [Pipeline] sh 00:06:52.778 + git -C spdk log --oneline -n5 00:06:52.778 3c5c3d590 bdevperf: Remove TAILQ_REMOVE which may result in potential memory leak 00:06:52.778 f5304d661 bdev/malloc: Fix unexpected DIF verification error for initial read 00:06:52.778 baa2dd0a5 dif: Set DIF field to 0 explicitly if its check is disabled 00:06:52.778 a91d250fa bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata 00:06:52.778 ff173863b ut/bdev: Remove duplication with many stups among unit test files 00:06:52.789 [Pipeline] } 00:06:52.802 [Pipeline] // stage 00:06:52.810 [Pipeline] stage 00:06:52.812 [Pipeline] { (Prepare) 00:06:52.830 [Pipeline] writeFile 00:06:52.848 [Pipeline] sh 00:06:53.138 + logger -p user.info -t JENKINS-CI 00:06:53.152 [Pipeline] sh 00:06:53.441 + logger -p user.info -t JENKINS-CI 00:06:53.454 [Pipeline] sh 00:06:53.743 + cat autorun-spdk.conf 00:06:53.743 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:53.743 SPDK_TEST_NVMF=1 00:06:53.743 SPDK_TEST_NVME_CLI=1 00:06:53.743 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:53.743 SPDK_TEST_NVMF_NICS=e810 00:06:53.743 SPDK_TEST_VFIOUSER=1 00:06:53.743 SPDK_RUN_UBSAN=1 00:06:53.743 NET_TYPE=phy 00:06:53.750 RUN_NIGHTLY=0 00:06:53.755 [Pipeline] readFile 00:06:53.785 [Pipeline] withEnv 00:06:53.787 [Pipeline] { 00:06:53.801 [Pipeline] sh 00:06:54.087 + set -ex 00:06:54.087 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:06:54.087 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:54.087 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:54.087 ++ SPDK_TEST_NVMF=1 00:06:54.087 
++ SPDK_TEST_NVME_CLI=1 00:06:54.087 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:54.087 ++ SPDK_TEST_NVMF_NICS=e810 00:06:54.087 ++ SPDK_TEST_VFIOUSER=1 00:06:54.087 ++ SPDK_RUN_UBSAN=1 00:06:54.087 ++ NET_TYPE=phy 00:06:54.087 ++ RUN_NIGHTLY=0 00:06:54.087 + case $SPDK_TEST_NVMF_NICS in 00:06:54.087 + DRIVERS=ice 00:06:54.087 + [[ tcp == \r\d\m\a ]] 00:06:54.087 + [[ -n ice ]] 00:06:54.087 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:06:54.087 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:06:57.380 rmmod: ERROR: Module irdma is not currently loaded 00:06:57.380 rmmod: ERROR: Module i40iw is not currently loaded 00:06:57.380 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:06:57.380 + true 00:06:57.380 + for D in $DRIVERS 00:06:57.380 + sudo modprobe ice 00:06:57.380 + exit 0 00:06:57.389 [Pipeline] } 00:06:57.406 [Pipeline] // withEnv 00:06:57.411 [Pipeline] } 00:06:57.425 [Pipeline] // stage 00:06:57.435 [Pipeline] catchError 00:06:57.437 [Pipeline] { 00:06:57.452 [Pipeline] timeout 00:06:57.453 Timeout set to expire in 1 hr 0 min 00:06:57.454 [Pipeline] { 00:06:57.469 [Pipeline] stage 00:06:57.471 [Pipeline] { (Tests) 00:06:57.486 [Pipeline] sh 00:06:57.772 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:57.772 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:57.773 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:57.773 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:06:57.773 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:57.773 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:57.773 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:06:57.773 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:57.773 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:57.773 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:57.773 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:06:57.773 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:57.773 + source /etc/os-release 00:06:57.773 ++ NAME='Fedora Linux' 00:06:57.773 ++ VERSION='39 (Cloud Edition)' 00:06:57.773 ++ ID=fedora 00:06:57.773 ++ VERSION_ID=39 00:06:57.773 ++ VERSION_CODENAME= 00:06:57.773 ++ PLATFORM_ID=platform:f39 00:06:57.773 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:57.773 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:57.773 ++ LOGO=fedora-logo-icon 00:06:57.773 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:57.773 ++ HOME_URL=https://fedoraproject.org/ 00:06:57.773 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:57.773 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:57.773 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:57.773 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:57.773 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:57.773 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:57.773 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:57.773 ++ SUPPORT_END=2024-11-12 00:06:57.773 ++ VARIANT='Cloud Edition' 00:06:57.773 ++ VARIANT_ID=cloud 00:06:57.773 + uname -a 00:06:57.773 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:57.773 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:58.711 Hugepages 00:06:58.711 node hugesize free / total 00:06:58.711 node0 1048576kB 0 / 0 00:06:58.711 node0 2048kB 0 / 0 00:06:58.711 node1 1048576kB 0 / 0 00:06:58.711 node1 2048kB 0 / 
0 00:06:58.711 00:06:58.711 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:58.711 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:58.711 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:58.711 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:58.711 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:58.711 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:58.711 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:58.711 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:58.711 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:58.711 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:58.711 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:58.711 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:58.711 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:58.711 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:58.711 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:58.711 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:58.711 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:58.711 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:58.971 + rm -f /tmp/spdk-ld-path 00:06:58.971 + source autorun-spdk.conf 00:06:58.971 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:58.971 ++ SPDK_TEST_NVMF=1 00:06:58.971 ++ SPDK_TEST_NVME_CLI=1 00:06:58.971 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:58.971 ++ SPDK_TEST_NVMF_NICS=e810 00:06:58.971 ++ SPDK_TEST_VFIOUSER=1 00:06:58.971 ++ SPDK_RUN_UBSAN=1 00:06:58.971 ++ NET_TYPE=phy 00:06:58.971 ++ RUN_NIGHTLY=0 00:06:58.971 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:58.971 + [[ -n '' ]] 00:06:58.971 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.971 + for M in /var/spdk/build-*-manifest.txt 00:06:58.971 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:58.971 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:58.971 + for M in /var/spdk/build-*-manifest.txt 00:06:58.971 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:58.971 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:58.971 + for M in /var/spdk/build-*-manifest.txt 00:06:58.971 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:58.971 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:58.971 ++ uname 00:06:58.971 + [[ Linux == \L\i\n\u\x ]] 00:06:58.971 + sudo dmesg -T 00:06:58.971 + sudo dmesg --clear 00:06:58.971 + dmesg_pid=410246 00:06:58.971 + [[ Fedora Linux == FreeBSD ]] 00:06:58.971 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.971 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:58.971 + sudo dmesg -Tw 00:06:58.971 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:58.971 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:06:58.971 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:06:58.971 + [[ -x /usr/src/fio-static/fio ]] 00:06:58.971 + export FIO_BIN=/usr/src/fio-static/fio 00:06:58.971 + FIO_BIN=/usr/src/fio-static/fio 00:06:58.971 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:58.971 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:06:58.971 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:58.971 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.971 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:58.971 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:58.971 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.971 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:58.971 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:58.971 18:02:46 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:58.971 18:02:46 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:58.971 18:02:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:58.971 18:02:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:06:58.971 18:02:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:06:58.971 18:02:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:58.971 18:02:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:06:58.971 18:02:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:06:58.971 18:02:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:06:58.971 18:02:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:06:58.971 18:02:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:06:58.971 18:02:46 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:58.971 18:02:46 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:58.971 18:02:46 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:58.971 18:02:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:58.971 18:02:46 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:58.971 18:02:46 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:58.971 18:02:46 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.971 18:02:46 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.971 18:02:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.971 18:02:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.971 18:02:46 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.971 18:02:46 -- paths/export.sh@5 -- $ export PATH 00:06:58.971 18:02:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.972 18:02:46 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:58.972 18:02:46 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:58.972 18:02:46 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732640566.XXXXXX 00:06:58.972 18:02:46 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732640566.9R0OBA 00:06:58.972 18:02:46 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:58.972 18:02:46 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:58.972 18:02:46 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:06:58.972 18:02:46 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:06:58.972 18:02:46 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:06:58.972 18:02:46 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:58.972 18:02:46 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:58.972 18:02:46 -- common/autotest_common.sh@10 -- $ set +x 00:06:58.972 18:02:46 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:06:58.972 18:02:46 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:58.972 18:02:46 -- pm/common@17 -- $ local monitor 00:06:58.972 18:02:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.972 18:02:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.972 18:02:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.972 18:02:46 -- pm/common@21 -- $ date +%s 00:06:58.972 18:02:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:58.972 18:02:46 -- pm/common@21 -- $ date +%s 00:06:58.972 18:02:46 -- pm/common@25 -- $ sleep 1 00:06:58.972 18:02:46 -- pm/common@21 -- $ date +%s 00:06:58.972 18:02:46 -- pm/common@21 -- $ date +%s 00:06:58.972 18:02:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732640566 00:06:58.972 18:02:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732640566 00:06:58.972 18:02:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732640566 00:06:58.972 18:02:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732640566 00:06:58.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732640566_collect-cpu-load.pm.log 00:06:58.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732640566_collect-vmstat.pm.log 00:06:58.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732640566_collect-cpu-temp.pm.log 00:06:58.972 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732640566_collect-bmc-pm.bmc.pm.log 00:07:00.354 18:02:47 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:07:00.354 18:02:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:00.354 18:02:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:00.354 18:02:47 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:00.354 18:02:47 -- spdk/autobuild.sh@16 -- $ date -u 00:07:00.354 Tue Nov 26 05:02:47 PM UTC 2024 00:07:00.354 18:02:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:00.354 v25.01-pre-256-g3c5c3d590 00:07:00.354 18:02:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:07:00.354 18:02:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:00.354 18:02:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:00.354 18:02:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:00.354 18:02:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:00.354 18:02:47 -- common/autotest_common.sh@10 -- $ set +x 00:07:00.354 ************************************ 00:07:00.354 START TEST ubsan 00:07:00.354 ************************************ 00:07:00.354 18:02:47 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:07:00.354 using ubsan 00:07:00.354 00:07:00.354 real 0m0.000s 00:07:00.354 user 0m0.000s 00:07:00.354 sys 0m0.000s 00:07:00.354 18:02:47 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:00.354 18:02:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:00.354 ************************************ 00:07:00.354 END TEST ubsan 00:07:00.354 ************************************ 00:07:00.354 18:02:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:07:00.354 18:02:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:00.354 18:02:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:07:00.354 18:02:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:00.354 18:02:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:00.354 18:02:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:00.354 18:02:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:07:00.354 18:02:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:07:00.354 
18:02:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:07:00.354 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:00.354 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:00.614 Using 'verbs' RDMA provider 00:07:11.215 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:07:21.254 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:07:21.512 Creating mk/config.mk...done. 00:07:21.512 Creating mk/cc.flags.mk...done. 00:07:21.512 Type 'make' to build. 00:07:21.512 18:03:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:07:21.512 18:03:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:07:21.512 18:03:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:07:21.512 18:03:09 -- common/autotest_common.sh@10 -- $ set +x 00:07:21.512 ************************************ 00:07:21.512 START TEST make 00:07:21.512 ************************************ 00:07:21.512 18:03:09 make -- common/autotest_common.sh@1129 -- $ make -j48 00:07:21.769 make[1]: Nothing to be done for 'all'. 00:07:23.690 The Meson build system 00:07:23.690 Version: 1.5.0 00:07:23.690 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:07:23.690 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:23.690 Build type: native build 00:07:23.690 Project name: libvfio-user 00:07:23.690 Project version: 0.0.1 00:07:23.690 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:23.690 C linker for the host machine: cc ld.bfd 2.40-14 00:07:23.690 Host machine cpu family: x86_64 00:07:23.690 Host machine cpu: x86_64 00:07:23.690 Run-time dependency threads found: YES 00:07:23.690 Library dl found: YES 00:07:23.690 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:23.690 Run-time dependency json-c found: YES 0.17 00:07:23.690 Run-time dependency cmocka found: YES 1.1.7 00:07:23.690 Program pytest-3 found: NO 00:07:23.690 Program flake8 found: NO 00:07:23.690 Program misspell-fixer found: NO 00:07:23.690 Program restructuredtext-lint found: NO 00:07:23.690 Program valgrind found: YES (/usr/bin/valgrind) 00:07:23.690 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:23.690 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:23.690 Compiler for C supports arguments -Wwrite-strings: YES 00:07:23.690 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:07:23.690 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:07:23.690 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:07:23.690 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:07:23.690 Build targets in project: 8 00:07:23.690 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:07:23.690 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:07:23.690 00:07:23.690 libvfio-user 0.0.1 00:07:23.690 00:07:23.690 User defined options 00:07:23.690 buildtype : debug 00:07:23.690 default_library: shared 00:07:23.690 libdir : /usr/local/lib 00:07:23.690 00:07:23.690 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:24.264 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:24.526 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:07:24.527 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:07:24.527 [3/37] Compiling C object samples/null.p/null.c.o 00:07:24.527 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:07:24.527 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:07:24.527 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:07:24.527 [7/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:07:24.785 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:07:24.785 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:07:24.785 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:07:24.785 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:07:24.785 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:07:24.785 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:07:24.785 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:07:24.785 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:07:24.785 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:07:24.785 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:07:24.785 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:07:24.785 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:07:24.785 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:07:24.785 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:07:24.785 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:07:24.785 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:07:24.785 [24/37] Compiling C object samples/server.p/server.c.o 00:07:24.785 [25/37] Compiling C object samples/client.p/client.c.o 00:07:24.785 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:07:24.785 [27/37] Linking target samples/client 00:07:25.048 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:07:25.048 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:07:25.048 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:07:25.048 [31/37] Linking target test/unit_tests 00:07:25.309 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:07:25.309 [33/37] Linking target samples/null 00:07:25.309 [34/37] Linking target samples/server 00:07:25.309 [35/37] Linking target samples/gpio-pci-idio-16 00:07:25.309 [36/37] Linking target samples/lspci 00:07:25.309 [37/37] Linking target samples/shadow_ioeventfd_server 00:07:25.309 INFO: autodetecting backend as ninja 00:07:25.309 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:07:25.309 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:07:26.255 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:07:26.255 ninja: no work to do. 00:07:31.522 The Meson build system 00:07:31.522 Version: 1.5.0 00:07:31.522 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:07:31.522 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:07:31.522 Build type: native build 00:07:31.522 Program cat found: YES (/usr/bin/cat) 00:07:31.522 Project name: DPDK 00:07:31.522 Project version: 24.03.0 00:07:31.522 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:31.522 C linker for the host machine: cc ld.bfd 2.40-14 00:07:31.522 Host machine cpu family: x86_64 00:07:31.522 Host machine cpu: x86_64 00:07:31.522 Message: ## Building in Developer Mode ## 00:07:31.522 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:31.522 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:07:31.522 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:31.522 Program python3 found: YES (/usr/bin/python3) 00:07:31.522 Program cat found: YES (/usr/bin/cat) 00:07:31.522 Compiler for C supports arguments -march=native: YES 00:07:31.522 Checking for size of "void *" : 8 00:07:31.522 Checking for size of "void *" : 8 (cached) 00:07:31.522 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:31.522 Library m found: YES 00:07:31.522 Library numa found: YES 00:07:31.522 Has header "numaif.h" : YES 00:07:31.522 Library fdt found: NO 00:07:31.522 Library execinfo found: NO 00:07:31.522 Has header "execinfo.h" : YES 00:07:31.522 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:31.522 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:31.522 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:31.522 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:31.522 Run-time dependency openssl found: YES 3.1.1 00:07:31.522 Run-time dependency libpcap found: YES 1.10.4 00:07:31.522 Has header "pcap.h" with dependency libpcap: YES 00:07:31.522 Compiler for C supports arguments -Wcast-qual: YES 00:07:31.522 Compiler for C supports arguments -Wdeprecated: YES 00:07:31.522 Compiler for C supports arguments -Wformat: YES 00:07:31.522 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:31.522 Compiler for C supports arguments -Wformat-security: NO 00:07:31.522 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:31.522 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:31.522 Compiler for C supports arguments -Wnested-externs: YES 00:07:31.522 Compiler for C supports arguments -Wold-style-definition: YES 00:07:31.522 Compiler for C supports arguments -Wpointer-arith: YES 00:07:31.522 Compiler for C supports arguments -Wsign-compare: YES 00:07:31.522 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:31.522 Compiler for C supports arguments -Wundef: YES 00:07:31.522 Compiler for C supports arguments -Wwrite-strings: YES 00:07:31.522 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:31.522 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:07:31.522 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:31.522 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:31.522 Program objdump found: YES (/usr/bin/objdump) 00:07:31.522 Compiler for C supports arguments -mavx512f: YES 00:07:31.522 Checking if "AVX512 checking" compiles: YES 00:07:31.522 Fetching value of define "__SSE4_2__" : 1 00:07:31.522 Fetching value of define "__AES__" : 1 00:07:31.522 Fetching value of define "__AVX__" : 1 00:07:31.522 Fetching value of define "__AVX2__" : (undefined) 00:07:31.522 Fetching value of define "__AVX512BW__" : (undefined) 00:07:31.522 Fetching value of define "__AVX512CD__" : (undefined) 00:07:31.522 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:31.522 Fetching value of define "__AVX512F__" : (undefined) 00:07:31.522 Fetching value of define "__AVX512VL__" : (undefined) 00:07:31.522 Fetching value of define "__PCLMUL__" : 1 00:07:31.522 Fetching value of define "__RDRND__" : 1 00:07:31.522 Fetching value of define "__RDSEED__" : (undefined) 00:07:31.522 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:31.522 Fetching value of define "__znver1__" : (undefined) 00:07:31.522 Fetching value of define "__znver2__" : (undefined) 00:07:31.522 Fetching value of define "__znver3__" : (undefined) 00:07:31.522 Fetching value of define "__znver4__" : (undefined) 00:07:31.522 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:31.523 Message: lib/log: Defining dependency "log" 00:07:31.523 Message: lib/kvargs: Defining dependency "kvargs" 00:07:31.523 Message: lib/telemetry: Defining dependency "telemetry" 00:07:31.523 Checking for function "getentropy" : NO 00:07:31.523 Message: lib/eal: Defining dependency "eal" 00:07:31.523 Message: lib/ring: Defining dependency "ring" 00:07:31.523 Message: lib/rcu: Defining dependency "rcu" 00:07:31.523 Message: lib/mempool: Defining dependency "mempool" 00:07:31.523 Message: lib/mbuf: Defining dependency "mbuf" 00:07:31.523 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:31.523 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:31.523 Compiler for C supports arguments -mpclmul: YES 00:07:31.523 Compiler for C supports arguments -maes: YES 00:07:31.523 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:31.523 Compiler for C supports arguments -mavx512bw: YES 00:07:31.523 Compiler for C supports arguments -mavx512dq: YES 00:07:31.523 Compiler for C supports arguments -mavx512vl: YES 00:07:31.523 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:31.523 Compiler for C supports arguments -mavx2: YES 00:07:31.523 Compiler for C supports arguments -mavx: YES 00:07:31.523 Message: lib/net: Defining dependency "net" 00:07:31.523 Message: lib/meter: Defining dependency "meter" 00:07:31.523 Message: lib/ethdev: Defining dependency "ethdev" 00:07:31.523 Message: lib/pci: Defining dependency "pci" 00:07:31.523 Message: lib/cmdline: Defining dependency "cmdline" 00:07:31.523 Message: lib/hash: Defining dependency "hash" 00:07:31.523 Message: lib/timer: Defining dependency "timer" 00:07:31.523 Message: lib/compressdev: Defining dependency "compressdev" 00:07:31.523 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:31.523 Message: lib/dmadev: Defining dependency "dmadev" 00:07:31.523 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:31.523 Message: lib/power: Defining dependency "power" 00:07:31.523 Message: lib/reorder: Defining dependency 
"reorder" 00:07:31.523 Message: lib/security: Defining dependency "security" 00:07:31.523 Has header "linux/userfaultfd.h" : YES 00:07:31.523 Has header "linux/vduse.h" : YES 00:07:31.523 Message: lib/vhost: Defining dependency "vhost" 00:07:31.523 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:31.523 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:31.523 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:31.523 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:31.523 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:31.523 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:31.523 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:31.523 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:31.523 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:31.523 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:31.523 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:31.523 Configuring doxy-api-html.conf using configuration 00:07:31.523 Configuring doxy-api-man.conf using configuration 00:07:31.523 Program mandb found: YES (/usr/bin/mandb) 00:07:31.523 Program sphinx-build found: NO 00:07:31.523 Configuring rte_build_config.h using configuration 00:07:31.523 Message: 00:07:31.523 ================= 00:07:31.523 Applications Enabled 00:07:31.523 ================= 00:07:31.523 00:07:31.523 apps: 00:07:31.523 00:07:31.523 00:07:31.523 Message: 00:07:31.523 ================= 00:07:31.523 Libraries Enabled 00:07:31.523 ================= 00:07:31.523 00:07:31.523 libs: 00:07:31.523 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:31.523 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:31.523 cryptodev, dmadev, power, reorder, security, vhost, 00:07:31.523 00:07:31.523 Message: 00:07:31.523 =============== 00:07:31.523 Drivers Enabled 00:07:31.523 =============== 00:07:31.523 00:07:31.523 common: 00:07:31.523 00:07:31.523 bus: 00:07:31.523 pci, vdev, 00:07:31.523 mempool: 00:07:31.523 ring, 00:07:31.523 dma: 00:07:31.523 00:07:31.523 net: 00:07:31.523 00:07:31.523 crypto: 00:07:31.523 00:07:31.523 compress: 00:07:31.523 00:07:31.523 vdpa: 00:07:31.523 00:07:31.523 00:07:31.523 Message: 00:07:31.523 ================= 00:07:31.523 Content Skipped 00:07:31.523 ================= 00:07:31.523 00:07:31.523 apps: 00:07:31.523 dumpcap: explicitly disabled via build config 00:07:31.523 graph: explicitly disabled via build config 00:07:31.523 pdump: explicitly disabled via build config 00:07:31.523 proc-info: explicitly disabled via build config 00:07:31.523 test-acl: explicitly disabled via build config 00:07:31.523 test-bbdev: explicitly disabled via build config 00:07:31.523 test-cmdline: explicitly disabled via build config 00:07:31.523 test-compress-perf: explicitly disabled via build config 00:07:31.523 test-crypto-perf: explicitly disabled via build config 00:07:31.523 test-dma-perf: explicitly disabled via build config 00:07:31.523 test-eventdev: explicitly disabled via build config 00:07:31.523 test-fib: explicitly disabled via build config 00:07:31.523 test-flow-perf: explicitly disabled via build config 00:07:31.523 test-gpudev: explicitly disabled via build config 00:07:31.523 test-mldev: explicitly disabled via build config 00:07:31.523 test-pipeline: explicitly disabled via build config 00:07:31.523 test-pmd: explicitly 
disabled via build config 00:07:31.523 test-regex: explicitly disabled via build config 00:07:31.523 test-sad: explicitly disabled via build config 00:07:31.523 test-security-perf: explicitly disabled via build config 00:07:31.523 00:07:31.523 libs: 00:07:31.523 argparse: explicitly disabled via build config 00:07:31.523 metrics: explicitly disabled via build config 00:07:31.523 acl: explicitly disabled via build config 00:07:31.523 bbdev: explicitly disabled via build config 00:07:31.523 bitratestats: explicitly disabled via build config 00:07:31.523 bpf: explicitly disabled via build config 00:07:31.523 cfgfile: explicitly disabled via build config 00:07:31.523 distributor: explicitly disabled via build config 00:07:31.523 efd: explicitly disabled via build config 00:07:31.523 eventdev: explicitly disabled via build config 00:07:31.523 dispatcher: explicitly disabled via build config 00:07:31.523 gpudev: explicitly disabled via build config 00:07:31.523 gro: explicitly disabled via build config 00:07:31.523 gso: explicitly disabled via build config 00:07:31.523 ip_frag: explicitly disabled via build config 00:07:31.523 jobstats: explicitly disabled via build config 00:07:31.523 latencystats: explicitly disabled via build config 00:07:31.523 lpm: explicitly disabled via build config 00:07:31.523 member: explicitly disabled via build config 00:07:31.523 pcapng: explicitly disabled via build config 00:07:31.523 rawdev: explicitly disabled via build config 00:07:31.523 regexdev: explicitly disabled via build config 00:07:31.523 mldev: explicitly disabled via build config 00:07:31.523 rib: explicitly disabled via build config 00:07:31.523 sched: explicitly disabled via build config 00:07:31.523 stack: explicitly disabled via build config 00:07:31.523 ipsec: explicitly disabled via build config 00:07:31.523 pdcp: explicitly disabled via build config 00:07:31.523 fib: explicitly disabled via build config 00:07:31.523 port: explicitly disabled via build config 00:07:31.523 pdump: explicitly disabled via build config 00:07:31.523 table: explicitly disabled via build config 00:07:31.523 pipeline: explicitly disabled via build config 00:07:31.523 graph: explicitly disabled via build config 00:07:31.523 node: explicitly disabled via build config 00:07:31.523 00:07:31.523 drivers: 00:07:31.523 common/cpt: not in enabled drivers build config 00:07:31.523 common/dpaax: not in enabled drivers build config 00:07:31.523 common/iavf: not in enabled drivers build config 00:07:31.523 common/idpf: not in enabled drivers build config 00:07:31.523 common/ionic: not in enabled drivers build config 00:07:31.523 common/mvep: not in enabled drivers build config 00:07:31.523 common/octeontx: not in enabled drivers build config 00:07:31.523 bus/auxiliary: not in enabled drivers build config 00:07:31.523 bus/cdx: not in enabled drivers build config 00:07:31.523 bus/dpaa: not in enabled drivers build config 00:07:31.523 bus/fslmc: not in enabled drivers build config 00:07:31.523 bus/ifpga: not in enabled drivers build config 00:07:31.523 bus/platform: not in enabled drivers build config 00:07:31.523 bus/uacce: not in enabled drivers build config 00:07:31.523 bus/vmbus: not in enabled drivers build config 00:07:31.523 common/cnxk: not in enabled drivers build config 00:07:31.523 common/mlx5: not in enabled drivers build config 00:07:31.523 common/nfp: not in enabled drivers build config 00:07:31.523 common/nitrox: not in enabled drivers build config 00:07:31.523 common/qat: not in enabled drivers build config 
00:07:31.523 common/sfc_efx: not in enabled drivers build config 00:07:31.523 mempool/bucket: not in enabled drivers build config 00:07:31.523 mempool/cnxk: not in enabled drivers build config 00:07:31.523 mempool/dpaa: not in enabled drivers build config 00:07:31.523 mempool/dpaa2: not in enabled drivers build config 00:07:31.523 mempool/octeontx: not in enabled drivers build config 00:07:31.523 mempool/stack: not in enabled drivers build config 00:07:31.523 dma/cnxk: not in enabled drivers build config 00:07:31.523 dma/dpaa: not in enabled drivers build config 00:07:31.523 dma/dpaa2: not in enabled drivers build config 00:07:31.523 dma/hisilicon: not in enabled drivers build config 00:07:31.523 dma/idxd: not in enabled drivers build config 00:07:31.523 dma/ioat: not in enabled drivers build config 00:07:31.523 dma/skeleton: not in enabled drivers build config 00:07:31.523 net/af_packet: not in enabled drivers build config 00:07:31.523 net/af_xdp: not in enabled drivers build config 00:07:31.523 net/ark: not in enabled drivers build config 00:07:31.523 net/atlantic: not in enabled drivers build config 00:07:31.523 net/avp: not in enabled drivers build config 00:07:31.523 net/axgbe: not in enabled drivers build config 00:07:31.523 net/bnx2x: not in enabled drivers build config 00:07:31.523 net/bnxt: not in enabled drivers build config 00:07:31.523 net/bonding: not in enabled drivers build config 00:07:31.523 net/cnxk: not in enabled drivers build config 00:07:31.523 net/cpfl: not in enabled drivers build config 00:07:31.523 net/cxgbe: not in enabled drivers build config 00:07:31.524 net/dpaa: not in enabled drivers build config 00:07:31.524 net/dpaa2: not in enabled drivers build config 00:07:31.524 net/e1000: not in enabled drivers build config 00:07:31.524 net/ena: not in enabled drivers build config 00:07:31.524 net/enetc: not in enabled drivers build config 00:07:31.524 net/enetfec: not in enabled drivers build config 00:07:31.524 net/enic: not in enabled drivers build config 00:07:31.524 net/failsafe: not in enabled drivers build config 00:07:31.524 net/fm10k: not in enabled drivers build config 00:07:31.524 net/gve: not in enabled drivers build config 00:07:31.524 net/hinic: not in enabled drivers build config 00:07:31.524 net/hns3: not in enabled drivers build config 00:07:31.524 net/i40e: not in enabled drivers build config 00:07:31.524 net/iavf: not in enabled drivers build config 00:07:31.524 net/ice: not in enabled drivers build config 00:07:31.524 net/idpf: not in enabled drivers build config 00:07:31.524 net/igc: not in enabled drivers build config 00:07:31.524 net/ionic: not in enabled drivers build config 00:07:31.524 net/ipn3ke: not in enabled drivers build config 00:07:31.524 net/ixgbe: not in enabled drivers build config 00:07:31.524 net/mana: not in enabled drivers build config 00:07:31.524 net/memif: not in enabled drivers build config 00:07:31.524 net/mlx4: not in enabled drivers build config 00:07:31.524 net/mlx5: not in enabled drivers build config 00:07:31.524 net/mvneta: not in enabled drivers build config 00:07:31.524 net/mvpp2: not in enabled drivers build config 00:07:31.524 net/netvsc: not in enabled drivers build config 00:07:31.524 net/nfb: not in enabled drivers build config 00:07:31.524 net/nfp: not in enabled drivers build config 00:07:31.524 net/ngbe: not in enabled drivers build config 00:07:31.524 net/null: not in enabled drivers build config 00:07:31.524 net/octeontx: not in enabled drivers build config 00:07:31.524 net/octeon_ep: not in enabled 
drivers build config 00:07:31.524 net/pcap: not in enabled drivers build config 00:07:31.524 net/pfe: not in enabled drivers build config 00:07:31.524 net/qede: not in enabled drivers build config 00:07:31.524 net/ring: not in enabled drivers build config 00:07:31.524 net/sfc: not in enabled drivers build config 00:07:31.524 net/softnic: not in enabled drivers build config 00:07:31.524 net/tap: not in enabled drivers build config 00:07:31.524 net/thunderx: not in enabled drivers build config 00:07:31.524 net/txgbe: not in enabled drivers build config 00:07:31.524 net/vdev_netvsc: not in enabled drivers build config 00:07:31.524 net/vhost: not in enabled drivers build config 00:07:31.524 net/virtio: not in enabled drivers build config 00:07:31.524 net/vmxnet3: not in enabled drivers build config 00:07:31.524 raw/*: missing internal dependency, "rawdev" 00:07:31.524 crypto/armv8: not in enabled drivers build config 00:07:31.524 crypto/bcmfs: not in enabled drivers build config 00:07:31.524 crypto/caam_jr: not in enabled drivers build config 00:07:31.524 crypto/ccp: not in enabled drivers build config 00:07:31.524 crypto/cnxk: not in enabled drivers build config 00:07:31.524 crypto/dpaa_sec: not in enabled drivers build config 00:07:31.524 crypto/dpaa2_sec: not in enabled drivers build config 00:07:31.524 crypto/ipsec_mb: not in enabled drivers build config 00:07:31.524 crypto/mlx5: not in enabled drivers build config 00:07:31.524 crypto/mvsam: not in enabled drivers build config 00:07:31.524 crypto/nitrox: not in enabled drivers build config 00:07:31.524 crypto/null: not in enabled drivers build config 00:07:31.524 crypto/octeontx: not in enabled drivers build config 00:07:31.524 crypto/openssl: not in enabled drivers build config 00:07:31.524 crypto/scheduler: not in enabled drivers build config 00:07:31.524 crypto/uadk: not in enabled drivers build config 00:07:31.524 crypto/virtio: not in enabled drivers build config 00:07:31.524 compress/isal: not in enabled drivers build config 00:07:31.524 compress/mlx5: not in enabled drivers build config 00:07:31.524 compress/nitrox: not in enabled drivers build config 00:07:31.524 compress/octeontx: not in enabled drivers build config 00:07:31.524 compress/zlib: not in enabled drivers build config 00:07:31.524 regex/*: missing internal dependency, "regexdev" 00:07:31.524 ml/*: missing internal dependency, "mldev" 00:07:31.524 vdpa/ifc: not in enabled drivers build config 00:07:31.524 vdpa/mlx5: not in enabled drivers build config 00:07:31.524 vdpa/nfp: not in enabled drivers build config 00:07:31.524 vdpa/sfc: not in enabled drivers build config 00:07:31.524 event/*: missing internal dependency, "eventdev" 00:07:31.524 baseband/*: missing internal dependency, "bbdev" 00:07:31.524 gpu/*: missing internal dependency, "gpudev" 00:07:31.524 00:07:31.524 00:07:31.524 Build targets in project: 85 00:07:31.524 00:07:31.524 DPDK 24.03.0 00:07:31.524 00:07:31.524 User defined options 00:07:31.524 buildtype : debug 00:07:31.524 default_library : shared 00:07:31.524 libdir : lib 00:07:31.524 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:31.524 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:31.524 c_link_args : 00:07:31.524 cpu_instruction_set: native 00:07:31.524 disable_apps : 
test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:07:31.524 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:07:31.524 enable_docs : false 00:07:31.524 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:31.524 enable_kmods : false 00:07:31.524 max_lcores : 128 00:07:31.524 tests : false 00:07:31.524 00:07:31.524 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:31.524 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:07:31.788 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:31.788 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:31.788 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:31.788 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:31.788 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:31.788 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:31.788 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:31.788 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:31.788 [9/268] Linking static target lib/librte_kvargs.a 00:07:31.788 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:31.788 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:31.788 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:31.788 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:31.788 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:31.788 [15/268] Linking static target lib/librte_log.a 00:07:31.788 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:32.357 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.617 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:32.617 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:32.617 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:32.617 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:32.617 [22/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:32.617 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:32.617 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:32.617 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:32.617 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:32.617 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:32.617 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:32.617 [29/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:32.617 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:32.617 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:32.617 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:32.617 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:32.617 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:32.617 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:32.617 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:32.617 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:32.617 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:32.617 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:32.617 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:32.617 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:32.617 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:32.617 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:32.617 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:32.617 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:32.617 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:32.617 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:32.617 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:32.617 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:32.884 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:32.884 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:32.884 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:32.884 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:32.884 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:32.884 [55/268] Linking static target lib/librte_telemetry.a 00:07:32.884 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:32.884 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:32.884 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:32.884 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:32.884 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:32.884 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:32.884 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:33.142 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:33.142 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:33.142 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:33.142 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.142 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:33.142 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:33.142 [69/268] Linking target 
lib/librte_log.so.24.1 00:07:33.142 [70/268] Linking static target lib/librte_pci.a 00:07:33.406 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:33.406 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:33.406 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:33.406 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:33.406 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:33.406 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:33.406 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:33.406 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:33.666 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:33.666 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:33.666 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:33.666 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:33.666 [83/268] Linking target lib/librte_kvargs.so.24.1 00:07:33.666 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:33.666 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:33.666 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:33.666 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:33.666 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:33.666 [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:33.666 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:33.666 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:33.666 [92/268] Linking static target lib/librte_ring.a 00:07:33.666 [93/268] Linking static target lib/librte_meter.a 00:07:33.666 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:33.666 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:33.666 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:33.666 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:33.666 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:33.666 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:33.666 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:33.666 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:33.666 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:33.667 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:33.667 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:33.667 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:33.667 [106/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.667 [107/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.931 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:33.931 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:33.931 [110/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:33.931 [111/268] Linking target lib/librte_telemetry.so.24.1 00:07:33.931 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:33.931 [113/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:33.931 [114/268] Linking static target lib/librte_rcu.a 00:07:33.931 [115/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:33.931 [116/268] Linking static target lib/librte_eal.a 00:07:33.931 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:33.931 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:33.931 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:33.931 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:33.931 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:33.931 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:33.931 [123/268] Linking static target lib/librte_mempool.a 00:07:33.931 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:33.931 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:34.195 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:34.195 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:34.195 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:34.195 [129/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.195 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:34.195 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:34.195 [132/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.195 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:34.195 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:34.459 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:34.459 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:34.459 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:34.459 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:34.460 [139/268] Linking static target lib/librte_net.a 00:07:34.460 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:34.460 [141/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.460 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:34.460 [143/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:34.460 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:34.460 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:34.460 [146/268] Linking static target lib/librte_cmdline.a 00:07:34.719 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:34.719 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:34.719 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:34.719 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:34.719 [151/268] 
Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:34.719 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:34.719 [153/268] Linking static target lib/librte_timer.a 00:07:34.719 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:34.719 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:34.719 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:34.720 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:34.978 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.978 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:34.978 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:34.978 [161/268] Linking static target lib/librte_dmadev.a 00:07:34.978 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:34.978 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:34.978 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:34.978 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:34.978 [166/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:34.978 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:34.978 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:35.235 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:35.235 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:35.235 [171/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.235 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:35.235 [173/268] Linking static target lib/librte_power.a 00:07:35.235 [174/268] Linking static target lib/librte_compressdev.a 00:07:35.235 [175/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.235 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:35.235 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:35.235 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:35.235 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:35.235 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:35.235 [181/268] Linking static target lib/librte_hash.a 00:07:35.235 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:35.235 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:35.235 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:35.494 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:35.494 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.494 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:35.494 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:35.494 [189/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.494 [190/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:35.494 [191/268] Linking static target lib/librte_mbuf.a 00:07:35.494 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:35.494 [193/268] Linking static target lib/librte_security.a 00:07:35.494 [194/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:35.494 [195/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:35.494 [196/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:35.494 [197/268] Linking static target lib/librte_reorder.a 00:07:35.494 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:35.494 [199/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.494 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:35.494 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:35.494 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:35.494 [203/268] Linking static target drivers/librte_bus_vdev.a 00:07:35.752 [204/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:35.752 [205/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:35.752 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:35.752 [207/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.752 [208/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:35.752 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:35.752 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:35.752 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:35.752 [212/268] Linking static target drivers/librte_bus_pci.a 00:07:35.752 [213/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.752 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.752 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.752 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:35.752 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:36.010 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:36.010 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.010 [220/268] Linking static target drivers/librte_mempool_ring.a 00:07:36.010 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:36.010 [222/268] Linking static target lib/librte_ethdev.a 00:07:36.010 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:36.267 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:36.267 [225/268] Linking static target lib/librte_cryptodev.a 00:07:36.267 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:37.201 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:38.572 [228/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:40.470 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.470 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:40.470 [231/268] Linking target lib/librte_eal.so.24.1 00:07:40.470 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:40.470 [233/268] Linking target lib/librte_ring.so.24.1 00:07:40.470 [234/268] Linking target lib/librte_pci.so.24.1 00:07:40.470 [235/268] Linking target lib/librte_meter.so.24.1 00:07:40.470 [236/268] Linking target lib/librte_timer.so.24.1 00:07:40.470 [237/268] Linking target lib/librte_dmadev.so.24.1 00:07:40.470 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:40.728 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:40.728 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:40.728 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:40.728 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:40.728 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:40.728 [244/268] Linking target lib/librte_rcu.so.24.1 00:07:40.728 [245/268] Linking target lib/librte_mempool.so.24.1 00:07:40.728 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:40.728 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:40.728 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:40.728 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:40.728 [250/268] Linking target lib/librte_mbuf.so.24.1 00:07:40.986 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:40.986 [252/268] Linking target lib/librte_reorder.so.24.1 00:07:40.986 [253/268] Linking target lib/librte_net.so.24.1 00:07:40.986 [254/268] Linking target lib/librte_compressdev.so.24.1 00:07:40.986 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:07:41.243 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:41.243 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:41.243 [258/268] Linking target lib/librte_hash.so.24.1 00:07:41.243 [259/268] Linking target lib/librte_security.so.24.1 00:07:41.243 [260/268] Linking target lib/librte_cmdline.so.24.1 00:07:41.243 [261/268] Linking target lib/librte_ethdev.so.24.1 00:07:41.243 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:41.243 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:41.501 [264/268] Linking target lib/librte_power.so.24.1 00:07:44.781 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:44.781 [266/268] Linking static target lib/librte_vhost.a 00:07:45.381 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:45.381 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:45.381 INFO: autodetecting backend as ninja 00:07:45.381 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:08:07.330 CC lib/ut/ut.o 00:08:07.330 CC lib/ut_mock/mock.o 
00:08:07.330 CC lib/log/log.o 00:08:07.330 CC lib/log/log_flags.o 00:08:07.330 CC lib/log/log_deprecated.o 00:08:07.330 LIB libspdk_ut.a 00:08:07.330 LIB libspdk_ut_mock.a 00:08:07.330 LIB libspdk_log.a 00:08:07.330 SO libspdk_ut.so.2.0 00:08:07.330 SO libspdk_ut_mock.so.6.0 00:08:07.330 SO libspdk_log.so.7.1 00:08:07.330 SYMLINK libspdk_ut.so 00:08:07.330 SYMLINK libspdk_ut_mock.so 00:08:07.330 SYMLINK libspdk_log.so 00:08:07.330 CXX lib/trace_parser/trace.o 00:08:07.330 CC lib/dma/dma.o 00:08:07.330 CC lib/ioat/ioat.o 00:08:07.330 CC lib/util/base64.o 00:08:07.330 CC lib/util/bit_array.o 00:08:07.330 CC lib/util/cpuset.o 00:08:07.330 CC lib/util/crc16.o 00:08:07.330 CC lib/util/crc32.o 00:08:07.330 CC lib/util/crc32c.o 00:08:07.330 CC lib/util/crc32_ieee.o 00:08:07.330 CC lib/util/crc64.o 00:08:07.330 CC lib/util/dif.o 00:08:07.330 CC lib/util/fd.o 00:08:07.330 CC lib/util/fd_group.o 00:08:07.330 CC lib/util/file.o 00:08:07.330 CC lib/util/hexlify.o 00:08:07.330 CC lib/util/iov.o 00:08:07.330 CC lib/util/math.o 00:08:07.330 CC lib/util/net.o 00:08:07.330 CC lib/util/pipe.o 00:08:07.330 CC lib/util/strerror_tls.o 00:08:07.330 CC lib/util/string.o 00:08:07.330 CC lib/util/uuid.o 00:08:07.330 CC lib/util/xor.o 00:08:07.330 CC lib/util/zipf.o 00:08:07.330 CC lib/util/md5.o 00:08:07.330 CC lib/vfio_user/host/vfio_user_pci.o 00:08:07.330 CC lib/vfio_user/host/vfio_user.o 00:08:07.330 LIB libspdk_dma.a 00:08:07.330 SO libspdk_dma.so.5.0 00:08:07.330 SYMLINK libspdk_dma.so 00:08:07.330 LIB libspdk_ioat.a 00:08:07.331 SO libspdk_ioat.so.7.0 00:08:07.331 SYMLINK libspdk_ioat.so 00:08:07.331 LIB libspdk_vfio_user.a 00:08:07.331 SO libspdk_vfio_user.so.5.0 00:08:07.331 SYMLINK libspdk_vfio_user.so 00:08:07.331 LIB libspdk_util.a 00:08:07.331 SO libspdk_util.so.10.1 00:08:07.331 SYMLINK libspdk_util.so 00:08:07.331 CC lib/env_dpdk/env.o 00:08:07.331 CC lib/json/json_parse.o 00:08:07.331 CC lib/conf/conf.o 00:08:07.331 CC lib/rdma_utils/rdma_utils.o 00:08:07.331 CC lib/json/json_util.o 00:08:07.331 CC lib/env_dpdk/memory.o 00:08:07.331 CC lib/env_dpdk/pci.o 00:08:07.331 CC lib/json/json_write.o 00:08:07.331 CC lib/env_dpdk/init.o 00:08:07.331 CC lib/env_dpdk/threads.o 00:08:07.331 CC lib/idxd/idxd.o 00:08:07.331 CC lib/env_dpdk/pci_ioat.o 00:08:07.331 CC lib/idxd/idxd_user.o 00:08:07.331 CC lib/env_dpdk/pci_virtio.o 00:08:07.331 CC lib/vmd/vmd.o 00:08:07.331 CC lib/idxd/idxd_kernel.o 00:08:07.331 CC lib/env_dpdk/pci_vmd.o 00:08:07.331 CC lib/vmd/led.o 00:08:07.331 CC lib/env_dpdk/pci_idxd.o 00:08:07.331 CC lib/env_dpdk/pci_event.o 00:08:07.331 CC lib/env_dpdk/sigbus_handler.o 00:08:07.331 CC lib/env_dpdk/pci_dpdk.o 00:08:07.331 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:07.331 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:07.331 LIB libspdk_trace_parser.a 00:08:07.331 SO libspdk_trace_parser.so.6.0 00:08:07.331 SYMLINK libspdk_trace_parser.so 00:08:07.331 LIB libspdk_conf.a 00:08:07.331 SO libspdk_conf.so.6.0 00:08:07.331 LIB libspdk_rdma_utils.a 00:08:07.331 LIB libspdk_json.a 00:08:07.331 SO libspdk_rdma_utils.so.1.0 00:08:07.331 SYMLINK libspdk_conf.so 00:08:07.331 SO libspdk_json.so.6.0 00:08:07.331 SYMLINK libspdk_rdma_utils.so 00:08:07.331 SYMLINK libspdk_json.so 00:08:07.331 CC lib/rdma_provider/common.o 00:08:07.331 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:07.331 CC lib/jsonrpc/jsonrpc_server.o 00:08:07.331 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:07.331 CC lib/jsonrpc/jsonrpc_client.o 00:08:07.331 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:07.331 LIB libspdk_idxd.a 00:08:07.331 SO 
libspdk_idxd.so.12.1 00:08:07.331 LIB libspdk_vmd.a 00:08:07.331 SYMLINK libspdk_idxd.so 00:08:07.331 SO libspdk_vmd.so.6.0 00:08:07.331 SYMLINK libspdk_vmd.so 00:08:07.331 LIB libspdk_rdma_provider.a 00:08:07.331 SO libspdk_rdma_provider.so.7.0 00:08:07.331 LIB libspdk_jsonrpc.a 00:08:07.331 SYMLINK libspdk_rdma_provider.so 00:08:07.331 SO libspdk_jsonrpc.so.6.0 00:08:07.331 SYMLINK libspdk_jsonrpc.so 00:08:07.588 CC lib/rpc/rpc.o 00:08:07.588 LIB libspdk_rpc.a 00:08:07.588 SO libspdk_rpc.so.6.0 00:08:07.845 SYMLINK libspdk_rpc.so 00:08:07.846 CC lib/keyring/keyring.o 00:08:07.846 CC lib/keyring/keyring_rpc.o 00:08:07.846 CC lib/trace/trace.o 00:08:07.846 CC lib/notify/notify.o 00:08:07.846 CC lib/trace/trace_flags.o 00:08:07.846 CC lib/notify/notify_rpc.o 00:08:07.846 CC lib/trace/trace_rpc.o 00:08:08.107 LIB libspdk_notify.a 00:08:08.107 SO libspdk_notify.so.6.0 00:08:08.107 LIB libspdk_keyring.a 00:08:08.107 SYMLINK libspdk_notify.so 00:08:08.107 SO libspdk_keyring.so.2.0 00:08:08.107 LIB libspdk_trace.a 00:08:08.107 SO libspdk_trace.so.11.0 00:08:08.107 SYMLINK libspdk_keyring.so 00:08:08.364 SYMLINK libspdk_trace.so 00:08:08.364 LIB libspdk_env_dpdk.a 00:08:08.364 CC lib/thread/thread.o 00:08:08.364 CC lib/thread/iobuf.o 00:08:08.364 SO libspdk_env_dpdk.so.15.1 00:08:08.364 CC lib/sock/sock.o 00:08:08.364 CC lib/sock/sock_rpc.o 00:08:08.620 SYMLINK libspdk_env_dpdk.so 00:08:08.878 LIB libspdk_sock.a 00:08:08.878 SO libspdk_sock.so.10.0 00:08:08.878 SYMLINK libspdk_sock.so 00:08:09.136 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:09.136 CC lib/nvme/nvme_ctrlr.o 00:08:09.136 CC lib/nvme/nvme_fabric.o 00:08:09.136 CC lib/nvme/nvme_ns_cmd.o 00:08:09.136 CC lib/nvme/nvme_ns.o 00:08:09.136 CC lib/nvme/nvme_pcie_common.o 00:08:09.136 CC lib/nvme/nvme_pcie.o 00:08:09.136 CC lib/nvme/nvme_qpair.o 00:08:09.136 CC lib/nvme/nvme.o 00:08:09.136 CC lib/nvme/nvme_quirks.o 00:08:09.136 CC lib/nvme/nvme_transport.o 00:08:09.136 CC lib/nvme/nvme_discovery.o 00:08:09.136 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:09.136 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:09.136 CC lib/nvme/nvme_tcp.o 00:08:09.136 CC lib/nvme/nvme_opal.o 00:08:09.136 CC lib/nvme/nvme_io_msg.o 00:08:09.136 CC lib/nvme/nvme_poll_group.o 00:08:09.136 CC lib/nvme/nvme_zns.o 00:08:09.136 CC lib/nvme/nvme_stubs.o 00:08:09.136 CC lib/nvme/nvme_auth.o 00:08:09.136 CC lib/nvme/nvme_vfio_user.o 00:08:09.136 CC lib/nvme/nvme_cuse.o 00:08:09.136 CC lib/nvme/nvme_rdma.o 00:08:10.069 LIB libspdk_thread.a 00:08:10.069 SO libspdk_thread.so.11.0 00:08:10.069 SYMLINK libspdk_thread.so 00:08:10.327 CC lib/blob/blobstore.o 00:08:10.327 CC lib/vfu_tgt/tgt_endpoint.o 00:08:10.327 CC lib/blob/request.o 00:08:10.327 CC lib/virtio/virtio.o 00:08:10.327 CC lib/fsdev/fsdev.o 00:08:10.327 CC lib/accel/accel.o 00:08:10.327 CC lib/blob/zeroes.o 00:08:10.327 CC lib/init/json_config.o 00:08:10.327 CC lib/vfu_tgt/tgt_rpc.o 00:08:10.327 CC lib/virtio/virtio_vhost_user.o 00:08:10.327 CC lib/blob/blob_bs_dev.o 00:08:10.327 CC lib/init/subsystem.o 00:08:10.327 CC lib/accel/accel_rpc.o 00:08:10.327 CC lib/fsdev/fsdev_io.o 00:08:10.327 CC lib/virtio/virtio_vfio_user.o 00:08:10.327 CC lib/fsdev/fsdev_rpc.o 00:08:10.327 CC lib/init/subsystem_rpc.o 00:08:10.327 CC lib/accel/accel_sw.o 00:08:10.327 CC lib/init/rpc.o 00:08:10.327 CC lib/virtio/virtio_pci.o 00:08:10.585 LIB libspdk_init.a 00:08:10.585 SO libspdk_init.so.6.0 00:08:10.585 LIB libspdk_virtio.a 00:08:10.585 LIB libspdk_vfu_tgt.a 00:08:10.585 SO libspdk_virtio.so.7.0 00:08:10.585 SO libspdk_vfu_tgt.so.3.0 00:08:10.585 SYMLINK 
libspdk_init.so 00:08:10.842 SYMLINK libspdk_virtio.so 00:08:10.842 SYMLINK libspdk_vfu_tgt.so 00:08:10.842 CC lib/event/app.o 00:08:10.842 CC lib/event/reactor.o 00:08:10.842 CC lib/event/log_rpc.o 00:08:10.842 CC lib/event/app_rpc.o 00:08:10.842 CC lib/event/scheduler_static.o 00:08:11.099 LIB libspdk_fsdev.a 00:08:11.099 SO libspdk_fsdev.so.2.0 00:08:11.099 SYMLINK libspdk_fsdev.so 00:08:11.357 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:11.357 LIB libspdk_event.a 00:08:11.357 SO libspdk_event.so.14.0 00:08:11.357 SYMLINK libspdk_event.so 00:08:11.616 LIB libspdk_accel.a 00:08:11.616 LIB libspdk_nvme.a 00:08:11.616 SO libspdk_accel.so.16.0 00:08:11.616 SYMLINK libspdk_accel.so 00:08:11.616 SO libspdk_nvme.so.15.0 00:08:11.872 CC lib/bdev/bdev.o 00:08:11.873 CC lib/bdev/bdev_rpc.o 00:08:11.873 CC lib/bdev/bdev_zone.o 00:08:11.873 CC lib/bdev/part.o 00:08:11.873 CC lib/bdev/scsi_nvme.o 00:08:11.873 SYMLINK libspdk_nvme.so 00:08:11.873 LIB libspdk_fuse_dispatcher.a 00:08:11.873 SO libspdk_fuse_dispatcher.so.1.0 00:08:11.873 SYMLINK libspdk_fuse_dispatcher.so 00:08:13.769 LIB libspdk_blob.a 00:08:13.769 SO libspdk_blob.so.12.0 00:08:13.769 SYMLINK libspdk_blob.so 00:08:13.769 CC lib/blobfs/blobfs.o 00:08:13.769 CC lib/blobfs/tree.o 00:08:13.769 CC lib/lvol/lvol.o 00:08:14.708 LIB libspdk_blobfs.a 00:08:14.708 LIB libspdk_bdev.a 00:08:14.708 SO libspdk_blobfs.so.11.0 00:08:14.708 LIB libspdk_lvol.a 00:08:14.708 SO libspdk_bdev.so.17.0 00:08:14.708 SO libspdk_lvol.so.11.0 00:08:14.708 SYMLINK libspdk_blobfs.so 00:08:14.708 SYMLINK libspdk_lvol.so 00:08:14.708 SYMLINK libspdk_bdev.so 00:08:14.708 CC lib/ublk/ublk.o 00:08:14.708 CC lib/scsi/dev.o 00:08:14.708 CC lib/nvmf/ctrlr.o 00:08:14.708 CC lib/scsi/lun.o 00:08:14.708 CC lib/nvmf/ctrlr_discovery.o 00:08:14.708 CC lib/scsi/port.o 00:08:14.708 CC lib/ublk/ublk_rpc.o 00:08:14.708 CC lib/nvmf/ctrlr_bdev.o 00:08:14.708 CC lib/scsi/scsi.o 00:08:14.708 CC lib/nvmf/subsystem.o 00:08:14.708 CC lib/scsi/scsi_bdev.o 00:08:14.708 CC lib/nvmf/nvmf.o 00:08:14.708 CC lib/nvmf/nvmf_rpc.o 00:08:14.708 CC lib/scsi/scsi_pr.o 00:08:14.708 CC lib/nbd/nbd.o 00:08:14.708 CC lib/nvmf/transport.o 00:08:14.708 CC lib/scsi/scsi_rpc.o 00:08:14.708 CC lib/nvmf/tcp.o 00:08:14.708 CC lib/nbd/nbd_rpc.o 00:08:14.708 CC lib/ftl/ftl_core.o 00:08:14.708 CC lib/scsi/task.o 00:08:14.708 CC lib/nvmf/stubs.o 00:08:14.708 CC lib/ftl/ftl_init.o 00:08:14.708 CC lib/nvmf/mdns_server.o 00:08:14.708 CC lib/nvmf/vfio_user.o 00:08:14.708 CC lib/ftl/ftl_debug.o 00:08:14.708 CC lib/ftl/ftl_layout.o 00:08:14.708 CC lib/ftl/ftl_io.o 00:08:14.708 CC lib/nvmf/rdma.o 00:08:14.708 CC lib/nvmf/auth.o 00:08:14.708 CC lib/ftl/ftl_sb.o 00:08:14.708 CC lib/ftl/ftl_l2p.o 00:08:14.708 CC lib/ftl/ftl_l2p_flat.o 00:08:14.708 CC lib/ftl/ftl_nv_cache.o 00:08:14.708 CC lib/ftl/ftl_band.o 00:08:14.708 CC lib/ftl/ftl_band_ops.o 00:08:14.708 CC lib/ftl/ftl_writer.o 00:08:14.708 CC lib/ftl/ftl_rq.o 00:08:14.708 CC lib/ftl/ftl_reloc.o 00:08:14.708 CC lib/ftl/ftl_l2p_cache.o 00:08:14.708 CC lib/ftl/ftl_p2l.o 00:08:14.708 CC lib/ftl/ftl_p2l_log.o 00:08:14.708 CC lib/ftl/mngt/ftl_mngt.o 00:08:14.708 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:14.708 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:14.708 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:14.708 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:14.708 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:15.280 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:15.280 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:15.280 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:15.280 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:15.280 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:08:15.280 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:15.280 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:15.280 CC lib/ftl/utils/ftl_conf.o 00:08:15.280 CC lib/ftl/utils/ftl_md.o 00:08:15.280 CC lib/ftl/utils/ftl_mempool.o 00:08:15.280 CC lib/ftl/utils/ftl_bitmap.o 00:08:15.280 CC lib/ftl/utils/ftl_property.o 00:08:15.280 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:15.280 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:15.280 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:15.280 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:15.280 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:15.280 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:15.280 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:15.540 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:15.540 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:15.540 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:15.540 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:15.540 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:15.540 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:15.540 CC lib/ftl/base/ftl_base_dev.o 00:08:15.540 CC lib/ftl/base/ftl_base_bdev.o 00:08:15.540 CC lib/ftl/ftl_trace.o 00:08:15.797 LIB libspdk_nbd.a 00:08:15.797 SO libspdk_nbd.so.7.0 00:08:15.797 LIB libspdk_scsi.a 00:08:15.797 SYMLINK libspdk_nbd.so 00:08:15.797 SO libspdk_scsi.so.9.0 00:08:15.797 SYMLINK libspdk_scsi.so 00:08:16.055 LIB libspdk_ublk.a 00:08:16.055 SO libspdk_ublk.so.3.0 00:08:16.055 CC lib/iscsi/conn.o 00:08:16.055 CC lib/vhost/vhost.o 00:08:16.055 CC lib/vhost/vhost_rpc.o 00:08:16.055 CC lib/iscsi/init_grp.o 00:08:16.055 CC lib/vhost/vhost_scsi.o 00:08:16.055 CC lib/iscsi/iscsi.o 00:08:16.055 CC lib/vhost/vhost_blk.o 00:08:16.055 CC lib/iscsi/param.o 00:08:16.055 CC lib/vhost/rte_vhost_user.o 00:08:16.055 CC lib/iscsi/portal_grp.o 00:08:16.055 CC lib/iscsi/tgt_node.o 00:08:16.055 CC lib/iscsi/iscsi_subsystem.o 00:08:16.056 CC lib/iscsi/iscsi_rpc.o 00:08:16.056 CC lib/iscsi/task.o 00:08:16.056 SYMLINK libspdk_ublk.so 00:08:16.313 LIB libspdk_ftl.a 00:08:16.570 SO libspdk_ftl.so.9.0 00:08:16.828 SYMLINK libspdk_ftl.so 00:08:17.396 LIB libspdk_vhost.a 00:08:17.396 SO libspdk_vhost.so.8.0 00:08:17.396 LIB libspdk_nvmf.a 00:08:17.396 SYMLINK libspdk_vhost.so 00:08:17.396 SO libspdk_nvmf.so.20.0 00:08:17.655 LIB libspdk_iscsi.a 00:08:17.655 SO libspdk_iscsi.so.8.0 00:08:17.655 SYMLINK libspdk_nvmf.so 00:08:17.655 SYMLINK libspdk_iscsi.so 00:08:17.912 CC module/env_dpdk/env_dpdk_rpc.o 00:08:17.912 CC module/vfu_device/vfu_virtio.o 00:08:17.912 CC module/vfu_device/vfu_virtio_blk.o 00:08:17.912 CC module/vfu_device/vfu_virtio_scsi.o 00:08:17.912 CC module/vfu_device/vfu_virtio_rpc.o 00:08:17.913 CC module/vfu_device/vfu_virtio_fs.o 00:08:18.170 CC module/accel/error/accel_error.o 00:08:18.170 CC module/accel/ioat/accel_ioat.o 00:08:18.170 CC module/keyring/linux/keyring.o 00:08:18.170 CC module/accel/error/accel_error_rpc.o 00:08:18.170 CC module/accel/ioat/accel_ioat_rpc.o 00:08:18.170 CC module/keyring/linux/keyring_rpc.o 00:08:18.170 CC module/sock/posix/posix.o 00:08:18.170 CC module/keyring/file/keyring.o 00:08:18.170 CC module/accel/dsa/accel_dsa.o 00:08:18.170 CC module/keyring/file/keyring_rpc.o 00:08:18.170 CC module/accel/iaa/accel_iaa.o 00:08:18.170 CC module/accel/dsa/accel_dsa_rpc.o 00:08:18.170 CC module/fsdev/aio/fsdev_aio.o 00:08:18.170 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:18.170 CC module/scheduler/gscheduler/gscheduler.o 00:08:18.170 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:18.170 CC module/accel/iaa/accel_iaa_rpc.o 00:08:18.170 CC 
module/fsdev/aio/fsdev_aio_rpc.o 00:08:18.170 CC module/fsdev/aio/linux_aio_mgr.o 00:08:18.170 CC module/blob/bdev/blob_bdev.o 00:08:18.170 LIB libspdk_env_dpdk_rpc.a 00:08:18.170 SO libspdk_env_dpdk_rpc.so.6.0 00:08:18.170 SYMLINK libspdk_env_dpdk_rpc.so 00:08:18.170 LIB libspdk_keyring_linux.a 00:08:18.170 LIB libspdk_keyring_file.a 00:08:18.170 LIB libspdk_scheduler_gscheduler.a 00:08:18.170 LIB libspdk_scheduler_dpdk_governor.a 00:08:18.170 SO libspdk_keyring_linux.so.1.0 00:08:18.170 SO libspdk_keyring_file.so.2.0 00:08:18.170 SO libspdk_scheduler_gscheduler.so.4.0 00:08:18.170 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:18.429 LIB libspdk_scheduler_dynamic.a 00:08:18.429 LIB libspdk_accel_ioat.a 00:08:18.429 LIB libspdk_accel_error.a 00:08:18.429 SYMLINK libspdk_keyring_linux.so 00:08:18.429 SYMLINK libspdk_keyring_file.so 00:08:18.429 SYMLINK libspdk_scheduler_gscheduler.so 00:08:18.429 SO libspdk_scheduler_dynamic.so.4.0 00:08:18.429 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:18.429 SO libspdk_accel_ioat.so.6.0 00:08:18.429 SO libspdk_accel_error.so.2.0 00:08:18.429 SYMLINK libspdk_scheduler_dynamic.so 00:08:18.429 SYMLINK libspdk_accel_ioat.so 00:08:18.429 LIB libspdk_accel_iaa.a 00:08:18.429 LIB libspdk_accel_dsa.a 00:08:18.429 SYMLINK libspdk_accel_error.so 00:08:18.429 SO libspdk_accel_iaa.so.3.0 00:08:18.429 SO libspdk_accel_dsa.so.5.0 00:08:18.429 SYMLINK libspdk_accel_iaa.so 00:08:18.429 SYMLINK libspdk_accel_dsa.so 00:08:18.429 LIB libspdk_blob_bdev.a 00:08:18.429 SO libspdk_blob_bdev.so.12.0 00:08:18.686 SYMLINK libspdk_blob_bdev.so 00:08:18.686 LIB libspdk_vfu_device.a 00:08:18.686 SO libspdk_vfu_device.so.3.0 00:08:18.686 SYMLINK libspdk_vfu_device.so 00:08:18.946 CC module/bdev/gpt/gpt.o 00:08:18.946 CC module/bdev/gpt/vbdev_gpt.o 00:08:18.946 CC module/bdev/lvol/vbdev_lvol.o 00:08:18.946 CC module/bdev/malloc/bdev_malloc.o 00:08:18.946 CC module/bdev/delay/vbdev_delay.o 00:08:18.946 CC module/bdev/nvme/bdev_nvme.o 00:08:18.946 CC module/bdev/split/vbdev_split.o 00:08:18.946 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:18.946 CC module/bdev/passthru/vbdev_passthru.o 00:08:18.946 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:18.946 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:18.946 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:18.946 CC module/bdev/split/vbdev_split_rpc.o 00:08:18.946 CC module/bdev/error/vbdev_error.o 00:08:18.946 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:18.946 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:18.946 CC module/bdev/nvme/nvme_rpc.o 00:08:18.946 CC module/blobfs/bdev/blobfs_bdev.o 00:08:18.946 CC module/bdev/nvme/bdev_mdns_client.o 00:08:18.946 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:18.946 CC module/bdev/error/vbdev_error_rpc.o 00:08:18.946 CC module/bdev/nvme/vbdev_opal.o 00:08:18.946 CC module/bdev/iscsi/bdev_iscsi.o 00:08:18.946 CC module/bdev/raid/bdev_raid.o 00:08:18.946 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:18.946 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:18.946 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:18.946 CC module/bdev/raid/bdev_raid_rpc.o 00:08:18.946 CC module/bdev/ftl/bdev_ftl.o 00:08:18.946 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:18.946 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:18.946 CC module/bdev/raid/bdev_raid_sb.o 00:08:18.946 CC module/bdev/raid/raid0.o 00:08:18.946 CC module/bdev/aio/bdev_aio.o 00:08:18.946 CC module/bdev/null/bdev_null.o 00:08:18.946 CC module/bdev/aio/bdev_aio_rpc.o 00:08:18.946 CC module/bdev/null/bdev_null_rpc.o 00:08:18.946 CC 
module/bdev/raid/raid1.o 00:08:18.946 CC module/bdev/raid/concat.o 00:08:18.946 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:18.946 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:18.946 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:18.946 LIB libspdk_sock_posix.a 00:08:18.946 LIB libspdk_fsdev_aio.a 00:08:18.946 SO libspdk_sock_posix.so.6.0 00:08:18.946 SO libspdk_fsdev_aio.so.1.0 00:08:19.203 SYMLINK libspdk_fsdev_aio.so 00:08:19.203 SYMLINK libspdk_sock_posix.so 00:08:19.203 LIB libspdk_bdev_passthru.a 00:08:19.203 LIB libspdk_blobfs_bdev.a 00:08:19.203 SO libspdk_blobfs_bdev.so.6.0 00:08:19.203 SO libspdk_bdev_passthru.so.6.0 00:08:19.203 LIB libspdk_bdev_split.a 00:08:19.203 SO libspdk_bdev_split.so.6.0 00:08:19.203 LIB libspdk_bdev_error.a 00:08:19.203 SYMLINK libspdk_blobfs_bdev.so 00:08:19.203 SO libspdk_bdev_error.so.6.0 00:08:19.203 LIB libspdk_bdev_ftl.a 00:08:19.203 SYMLINK libspdk_bdev_split.so 00:08:19.203 SYMLINK libspdk_bdev_passthru.so 00:08:19.460 SO libspdk_bdev_ftl.so.6.0 00:08:19.460 LIB libspdk_bdev_null.a 00:08:19.460 LIB libspdk_bdev_gpt.a 00:08:19.460 LIB libspdk_bdev_zone_block.a 00:08:19.460 SYMLINK libspdk_bdev_error.so 00:08:19.460 SO libspdk_bdev_gpt.so.6.0 00:08:19.460 SO libspdk_bdev_null.so.6.0 00:08:19.460 SO libspdk_bdev_zone_block.so.6.0 00:08:19.460 LIB libspdk_bdev_malloc.a 00:08:19.460 SYMLINK libspdk_bdev_ftl.so 00:08:19.460 LIB libspdk_bdev_delay.a 00:08:19.460 LIB libspdk_bdev_aio.a 00:08:19.460 SYMLINK libspdk_bdev_gpt.so 00:08:19.460 SYMLINK libspdk_bdev_null.so 00:08:19.460 SO libspdk_bdev_malloc.so.6.0 00:08:19.460 SO libspdk_bdev_delay.so.6.0 00:08:19.460 SYMLINK libspdk_bdev_zone_block.so 00:08:19.460 LIB libspdk_bdev_iscsi.a 00:08:19.460 SO libspdk_bdev_aio.so.6.0 00:08:19.460 SO libspdk_bdev_iscsi.so.6.0 00:08:19.460 SYMLINK libspdk_bdev_malloc.so 00:08:19.460 SYMLINK libspdk_bdev_delay.so 00:08:19.460 SYMLINK libspdk_bdev_aio.so 00:08:19.460 SYMLINK libspdk_bdev_iscsi.so 00:08:19.460 LIB libspdk_bdev_lvol.a 00:08:19.460 LIB libspdk_bdev_virtio.a 00:08:19.460 SO libspdk_bdev_lvol.so.6.0 00:08:19.718 SO libspdk_bdev_virtio.so.6.0 00:08:19.718 SYMLINK libspdk_bdev_lvol.so 00:08:19.718 SYMLINK libspdk_bdev_virtio.so 00:08:19.977 LIB libspdk_bdev_raid.a 00:08:20.234 SO libspdk_bdev_raid.so.6.0 00:08:20.234 SYMLINK libspdk_bdev_raid.so 00:08:21.611 LIB libspdk_bdev_nvme.a 00:08:21.611 SO libspdk_bdev_nvme.so.7.1 00:08:21.611 SYMLINK libspdk_bdev_nvme.so 00:08:22.179 CC module/event/subsystems/vmd/vmd.o 00:08:22.179 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:08:22.180 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:22.180 CC module/event/subsystems/iobuf/iobuf.o 00:08:22.180 CC module/event/subsystems/sock/sock.o 00:08:22.180 CC module/event/subsystems/fsdev/fsdev.o 00:08:22.180 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:22.180 CC module/event/subsystems/scheduler/scheduler.o 00:08:22.180 CC module/event/subsystems/keyring/keyring.o 00:08:22.180 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:22.180 LIB libspdk_event_keyring.a 00:08:22.180 LIB libspdk_event_vhost_blk.a 00:08:22.180 LIB libspdk_event_fsdev.a 00:08:22.180 LIB libspdk_event_scheduler.a 00:08:22.180 LIB libspdk_event_vfu_tgt.a 00:08:22.180 LIB libspdk_event_sock.a 00:08:22.180 LIB libspdk_event_vmd.a 00:08:22.180 SO libspdk_event_keyring.so.1.0 00:08:22.180 SO libspdk_event_vhost_blk.so.3.0 00:08:22.180 SO libspdk_event_fsdev.so.1.0 00:08:22.180 SO libspdk_event_vfu_tgt.so.3.0 00:08:22.180 SO libspdk_event_scheduler.so.4.0 00:08:22.180 LIB libspdk_event_iobuf.a 
00:08:22.180 SO libspdk_event_sock.so.5.0 00:08:22.180 SO libspdk_event_vmd.so.6.0 00:08:22.180 SO libspdk_event_iobuf.so.3.0 00:08:22.180 SYMLINK libspdk_event_keyring.so 00:08:22.180 SYMLINK libspdk_event_vhost_blk.so 00:08:22.180 SYMLINK libspdk_event_fsdev.so 00:08:22.180 SYMLINK libspdk_event_scheduler.so 00:08:22.180 SYMLINK libspdk_event_vfu_tgt.so 00:08:22.180 SYMLINK libspdk_event_sock.so 00:08:22.180 SYMLINK libspdk_event_vmd.so 00:08:22.180 SYMLINK libspdk_event_iobuf.so 00:08:22.474 CC module/event/subsystems/accel/accel.o 00:08:22.760 LIB libspdk_event_accel.a 00:08:22.760 SO libspdk_event_accel.so.6.0 00:08:22.760 SYMLINK libspdk_event_accel.so 00:08:22.760 CC module/event/subsystems/bdev/bdev.o 00:08:23.019 LIB libspdk_event_bdev.a 00:08:23.019 SO libspdk_event_bdev.so.6.0 00:08:23.019 SYMLINK libspdk_event_bdev.so 00:08:23.277 CC module/event/subsystems/ublk/ublk.o 00:08:23.277 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:23.277 CC module/event/subsystems/nbd/nbd.o 00:08:23.277 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:23.277 CC module/event/subsystems/scsi/scsi.o 00:08:23.535 LIB libspdk_event_ublk.a 00:08:23.535 LIB libspdk_event_nbd.a 00:08:23.535 LIB libspdk_event_scsi.a 00:08:23.535 SO libspdk_event_ublk.so.3.0 00:08:23.535 SO libspdk_event_nbd.so.6.0 00:08:23.535 SO libspdk_event_scsi.so.6.0 00:08:23.535 SYMLINK libspdk_event_ublk.so 00:08:23.535 SYMLINK libspdk_event_nbd.so 00:08:23.535 SYMLINK libspdk_event_scsi.so 00:08:23.535 LIB libspdk_event_nvmf.a 00:08:23.535 SO libspdk_event_nvmf.so.6.0 00:08:23.535 SYMLINK libspdk_event_nvmf.so 00:08:23.535 CC module/event/subsystems/iscsi/iscsi.o 00:08:23.535 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:23.794 LIB libspdk_event_vhost_scsi.a 00:08:23.794 SO libspdk_event_vhost_scsi.so.3.0 00:08:23.794 LIB libspdk_event_iscsi.a 00:08:23.794 SO libspdk_event_iscsi.so.6.0 00:08:23.794 SYMLINK libspdk_event_vhost_scsi.so 00:08:23.794 SYMLINK libspdk_event_iscsi.so 00:08:24.052 SO libspdk.so.6.0 00:08:24.052 SYMLINK libspdk.so 00:08:24.313 CC app/trace_record/trace_record.o 00:08:24.313 CC test/rpc_client/rpc_client_test.o 00:08:24.313 TEST_HEADER include/spdk/accel.h 00:08:24.313 CC app/spdk_top/spdk_top.o 00:08:24.313 TEST_HEADER include/spdk/accel_module.h 00:08:24.313 CC app/spdk_nvme_identify/identify.o 00:08:24.313 CC app/spdk_nvme_perf/perf.o 00:08:24.313 TEST_HEADER include/spdk/assert.h 00:08:24.313 TEST_HEADER include/spdk/barrier.h 00:08:24.313 TEST_HEADER include/spdk/base64.h 00:08:24.313 CXX app/trace/trace.o 00:08:24.313 TEST_HEADER include/spdk/bdev.h 00:08:24.313 TEST_HEADER include/spdk/bdev_module.h 00:08:24.313 TEST_HEADER include/spdk/bdev_zone.h 00:08:24.313 CC app/spdk_nvme_discover/discovery_aer.o 00:08:24.313 CC app/spdk_lspci/spdk_lspci.o 00:08:24.313 TEST_HEADER include/spdk/bit_array.h 00:08:24.313 TEST_HEADER include/spdk/bit_pool.h 00:08:24.313 TEST_HEADER include/spdk/blob_bdev.h 00:08:24.313 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:24.313 TEST_HEADER include/spdk/blobfs.h 00:08:24.313 TEST_HEADER include/spdk/blob.h 00:08:24.313 TEST_HEADER include/spdk/conf.h 00:08:24.313 TEST_HEADER include/spdk/config.h 00:08:24.313 TEST_HEADER include/spdk/cpuset.h 00:08:24.313 TEST_HEADER include/spdk/crc16.h 00:08:24.313 TEST_HEADER include/spdk/crc32.h 00:08:24.313 TEST_HEADER include/spdk/crc64.h 00:08:24.313 TEST_HEADER include/spdk/dif.h 00:08:24.313 TEST_HEADER include/spdk/dma.h 00:08:24.313 TEST_HEADER include/spdk/env_dpdk.h 00:08:24.313 TEST_HEADER include/spdk/endian.h 
00:08:24.313 TEST_HEADER include/spdk/env.h 00:08:24.313 TEST_HEADER include/spdk/event.h 00:08:24.313 TEST_HEADER include/spdk/fd_group.h 00:08:24.313 TEST_HEADER include/spdk/file.h 00:08:24.313 TEST_HEADER include/spdk/fd.h 00:08:24.313 TEST_HEADER include/spdk/fsdev.h 00:08:24.313 TEST_HEADER include/spdk/fsdev_module.h 00:08:24.313 TEST_HEADER include/spdk/ftl.h 00:08:24.313 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:24.313 TEST_HEADER include/spdk/gpt_spec.h 00:08:24.313 TEST_HEADER include/spdk/hexlify.h 00:08:24.313 TEST_HEADER include/spdk/idxd.h 00:08:24.313 TEST_HEADER include/spdk/histogram_data.h 00:08:24.313 TEST_HEADER include/spdk/idxd_spec.h 00:08:24.313 TEST_HEADER include/spdk/init.h 00:08:24.313 TEST_HEADER include/spdk/ioat_spec.h 00:08:24.313 TEST_HEADER include/spdk/ioat.h 00:08:24.313 TEST_HEADER include/spdk/iscsi_spec.h 00:08:24.313 TEST_HEADER include/spdk/json.h 00:08:24.313 TEST_HEADER include/spdk/jsonrpc.h 00:08:24.313 TEST_HEADER include/spdk/keyring.h 00:08:24.313 TEST_HEADER include/spdk/keyring_module.h 00:08:24.313 TEST_HEADER include/spdk/likely.h 00:08:24.313 TEST_HEADER include/spdk/log.h 00:08:24.313 TEST_HEADER include/spdk/lvol.h 00:08:24.313 TEST_HEADER include/spdk/md5.h 00:08:24.313 TEST_HEADER include/spdk/memory.h 00:08:24.313 TEST_HEADER include/spdk/mmio.h 00:08:24.313 TEST_HEADER include/spdk/net.h 00:08:24.313 TEST_HEADER include/spdk/nbd.h 00:08:24.313 TEST_HEADER include/spdk/notify.h 00:08:24.313 TEST_HEADER include/spdk/nvme.h 00:08:24.313 TEST_HEADER include/spdk/nvme_intel.h 00:08:24.313 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:24.313 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:24.313 TEST_HEADER include/spdk/nvme_spec.h 00:08:24.313 TEST_HEADER include/spdk/nvme_zns.h 00:08:24.313 TEST_HEADER include/spdk/nvmf_cmd.h 00:08:24.313 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:24.313 TEST_HEADER include/spdk/nvmf.h 00:08:24.313 TEST_HEADER include/spdk/nvmf_spec.h 00:08:24.313 TEST_HEADER include/spdk/nvmf_transport.h 00:08:24.313 TEST_HEADER include/spdk/opal_spec.h 00:08:24.314 TEST_HEADER include/spdk/opal.h 00:08:24.314 TEST_HEADER include/spdk/pci_ids.h 00:08:24.314 TEST_HEADER include/spdk/pipe.h 00:08:24.314 TEST_HEADER include/spdk/queue.h 00:08:24.314 TEST_HEADER include/spdk/reduce.h 00:08:24.314 TEST_HEADER include/spdk/rpc.h 00:08:24.314 TEST_HEADER include/spdk/scheduler.h 00:08:24.314 TEST_HEADER include/spdk/scsi.h 00:08:24.314 TEST_HEADER include/spdk/scsi_spec.h 00:08:24.314 TEST_HEADER include/spdk/stdinc.h 00:08:24.314 TEST_HEADER include/spdk/sock.h 00:08:24.314 TEST_HEADER include/spdk/string.h 00:08:24.314 TEST_HEADER include/spdk/thread.h 00:08:24.314 TEST_HEADER include/spdk/trace.h 00:08:24.314 TEST_HEADER include/spdk/trace_parser.h 00:08:24.314 TEST_HEADER include/spdk/tree.h 00:08:24.314 TEST_HEADER include/spdk/util.h 00:08:24.314 TEST_HEADER include/spdk/ublk.h 00:08:24.314 TEST_HEADER include/spdk/uuid.h 00:08:24.314 TEST_HEADER include/spdk/version.h 00:08:24.314 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:24.314 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:24.314 TEST_HEADER include/spdk/vhost.h 00:08:24.314 TEST_HEADER include/spdk/vmd.h 00:08:24.314 TEST_HEADER include/spdk/xor.h 00:08:24.314 TEST_HEADER include/spdk/zipf.h 00:08:24.314 CXX test/cpp_headers/accel_module.o 00:08:24.314 CXX test/cpp_headers/accel.o 00:08:24.314 CXX test/cpp_headers/assert.o 00:08:24.314 CXX test/cpp_headers/barrier.o 00:08:24.314 CXX test/cpp_headers/base64.o 00:08:24.314 CXX test/cpp_headers/bdev.o 
00:08:24.314 CXX test/cpp_headers/bdev_module.o 00:08:24.314 CXX test/cpp_headers/bdev_zone.o 00:08:24.314 CXX test/cpp_headers/bit_array.o 00:08:24.314 CXX test/cpp_headers/bit_pool.o 00:08:24.314 CXX test/cpp_headers/blob_bdev.o 00:08:24.314 CXX test/cpp_headers/blobfs_bdev.o 00:08:24.314 CXX test/cpp_headers/blobfs.o 00:08:24.314 CC app/spdk_dd/spdk_dd.o 00:08:24.314 CXX test/cpp_headers/blob.o 00:08:24.314 CXX test/cpp_headers/conf.o 00:08:24.314 CXX test/cpp_headers/config.o 00:08:24.314 CXX test/cpp_headers/cpuset.o 00:08:24.314 CXX test/cpp_headers/crc16.o 00:08:24.314 CC app/nvmf_tgt/nvmf_main.o 00:08:24.314 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:24.314 CC app/iscsi_tgt/iscsi_tgt.o 00:08:24.314 CXX test/cpp_headers/crc32.o 00:08:24.314 CC examples/ioat/verify/verify.o 00:08:24.314 CC app/spdk_tgt/spdk_tgt.o 00:08:24.314 CC test/thread/poller_perf/poller_perf.o 00:08:24.314 CC examples/ioat/perf/perf.o 00:08:24.314 CC examples/util/zipf/zipf.o 00:08:24.314 CC test/app/jsoncat/jsoncat.o 00:08:24.314 CC test/env/memory/memory_ut.o 00:08:24.314 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:24.314 CC test/env/pci/pci_ut.o 00:08:24.314 CC test/app/histogram_perf/histogram_perf.o 00:08:24.314 CC test/env/vtophys/vtophys.o 00:08:24.314 CC app/fio/nvme/fio_plugin.o 00:08:24.314 CC test/app/stub/stub.o 00:08:24.314 CC test/dma/test_dma/test_dma.o 00:08:24.314 CC app/fio/bdev/fio_plugin.o 00:08:24.314 CC test/app/bdev_svc/bdev_svc.o 00:08:24.575 LINK spdk_lspci 00:08:24.575 CC test/env/mem_callbacks/mem_callbacks.o 00:08:24.575 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:24.575 LINK spdk_nvme_discover 00:08:24.575 LINK rpc_client_test 00:08:24.575 LINK jsoncat 00:08:24.575 LINK poller_perf 00:08:24.575 LINK nvmf_tgt 00:08:24.575 CXX test/cpp_headers/crc64.o 00:08:24.575 LINK vtophys 00:08:24.575 CXX test/cpp_headers/dif.o 00:08:24.838 CXX test/cpp_headers/dma.o 00:08:24.838 CXX test/cpp_headers/endian.o 00:08:24.838 LINK histogram_perf 00:08:24.838 LINK env_dpdk_post_init 00:08:24.838 LINK zipf 00:08:24.838 CXX test/cpp_headers/env_dpdk.o 00:08:24.838 CXX test/cpp_headers/env.o 00:08:24.838 LINK interrupt_tgt 00:08:24.838 CXX test/cpp_headers/event.o 00:08:24.838 CXX test/cpp_headers/fd_group.o 00:08:24.838 CXX test/cpp_headers/fd.o 00:08:24.838 CXX test/cpp_headers/file.o 00:08:24.838 LINK spdk_trace_record 00:08:24.838 CXX test/cpp_headers/fsdev.o 00:08:24.838 LINK stub 00:08:24.838 LINK verify 00:08:24.838 LINK iscsi_tgt 00:08:24.838 LINK ioat_perf 00:08:24.838 CXX test/cpp_headers/fsdev_module.o 00:08:24.838 CXX test/cpp_headers/ftl.o 00:08:24.838 CXX test/cpp_headers/gpt_spec.o 00:08:24.838 CXX test/cpp_headers/fuse_dispatcher.o 00:08:24.838 CXX test/cpp_headers/hexlify.o 00:08:24.838 LINK spdk_tgt 00:08:24.838 LINK bdev_svc 00:08:24.838 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:24.838 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:24.838 CXX test/cpp_headers/histogram_data.o 00:08:24.838 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:24.838 CXX test/cpp_headers/idxd.o 00:08:24.838 CXX test/cpp_headers/idxd_spec.o 00:08:25.099 CXX test/cpp_headers/init.o 00:08:25.099 CXX test/cpp_headers/ioat.o 00:08:25.099 CXX test/cpp_headers/ioat_spec.o 00:08:25.099 CXX test/cpp_headers/iscsi_spec.o 00:08:25.099 LINK spdk_dd 00:08:25.099 CXX test/cpp_headers/json.o 00:08:25.099 CXX test/cpp_headers/jsonrpc.o 00:08:25.099 CXX test/cpp_headers/keyring.o 00:08:25.099 CXX test/cpp_headers/keyring_module.o 00:08:25.099 CXX test/cpp_headers/likely.o 00:08:25.099 CXX 
test/cpp_headers/log.o 00:08:25.099 CXX test/cpp_headers/lvol.o 00:08:25.099 CXX test/cpp_headers/md5.o 00:08:25.099 LINK spdk_trace 00:08:25.099 CXX test/cpp_headers/memory.o 00:08:25.099 LINK pci_ut 00:08:25.099 CXX test/cpp_headers/mmio.o 00:08:25.099 CXX test/cpp_headers/nbd.o 00:08:25.361 CXX test/cpp_headers/net.o 00:08:25.361 CXX test/cpp_headers/notify.o 00:08:25.361 CXX test/cpp_headers/nvme.o 00:08:25.361 CXX test/cpp_headers/nvme_intel.o 00:08:25.361 CXX test/cpp_headers/nvme_ocssd.o 00:08:25.361 CC test/event/event_perf/event_perf.o 00:08:25.361 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:25.361 CXX test/cpp_headers/nvme_spec.o 00:08:25.361 CC test/event/reactor/reactor.o 00:08:25.361 CC test/event/reactor_perf/reactor_perf.o 00:08:25.361 CXX test/cpp_headers/nvme_zns.o 00:08:25.361 CXX test/cpp_headers/nvmf_cmd.o 00:08:25.361 CC test/event/app_repeat/app_repeat.o 00:08:25.361 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:25.361 CXX test/cpp_headers/nvmf.o 00:08:25.361 CC test/event/scheduler/scheduler.o 00:08:25.361 LINK spdk_nvme 00:08:25.361 LINK nvme_fuzz 00:08:25.362 CXX test/cpp_headers/nvmf_spec.o 00:08:25.362 CC examples/sock/hello_world/hello_sock.o 00:08:25.362 CXX test/cpp_headers/nvmf_transport.o 00:08:25.362 CC examples/vmd/led/led.o 00:08:25.362 CC examples/vmd/lsvmd/lsvmd.o 00:08:25.362 LINK test_dma 00:08:25.362 LINK spdk_bdev 00:08:25.362 CXX test/cpp_headers/opal.o 00:08:25.362 CXX test/cpp_headers/opal_spec.o 00:08:25.362 CC examples/thread/thread/thread_ex.o 00:08:25.362 CC examples/idxd/perf/perf.o 00:08:25.624 CXX test/cpp_headers/pci_ids.o 00:08:25.624 CXX test/cpp_headers/pipe.o 00:08:25.624 CXX test/cpp_headers/queue.o 00:08:25.624 CXX test/cpp_headers/reduce.o 00:08:25.624 CXX test/cpp_headers/rpc.o 00:08:25.624 CXX test/cpp_headers/scheduler.o 00:08:25.624 CXX test/cpp_headers/scsi.o 00:08:25.624 CXX test/cpp_headers/scsi_spec.o 00:08:25.624 CXX test/cpp_headers/sock.o 00:08:25.624 CXX test/cpp_headers/stdinc.o 00:08:25.624 LINK event_perf 00:08:25.624 LINK reactor_perf 00:08:25.624 LINK reactor 00:08:25.624 CXX test/cpp_headers/string.o 00:08:25.624 CXX test/cpp_headers/thread.o 00:08:25.624 CXX test/cpp_headers/trace.o 00:08:25.624 CXX test/cpp_headers/trace_parser.o 00:08:25.625 CXX test/cpp_headers/tree.o 00:08:25.625 LINK app_repeat 00:08:25.625 CXX test/cpp_headers/ublk.o 00:08:25.625 CXX test/cpp_headers/util.o 00:08:25.625 CXX test/cpp_headers/uuid.o 00:08:25.625 CXX test/cpp_headers/version.o 00:08:25.625 CXX test/cpp_headers/vfio_user_pci.o 00:08:25.625 LINK vhost_fuzz 00:08:25.625 CXX test/cpp_headers/vfio_user_spec.o 00:08:25.625 LINK lsvmd 00:08:25.886 LINK spdk_nvme_perf 00:08:25.886 LINK led 00:08:25.886 CXX test/cpp_headers/vhost.o 00:08:25.886 CXX test/cpp_headers/vmd.o 00:08:25.886 LINK mem_callbacks 00:08:25.886 CXX test/cpp_headers/xor.o 00:08:25.886 CXX test/cpp_headers/zipf.o 00:08:25.886 LINK spdk_nvme_identify 00:08:25.886 CC app/vhost/vhost.o 00:08:25.886 LINK scheduler 00:08:25.886 LINK hello_sock 00:08:25.886 LINK thread 00:08:25.886 LINK spdk_top 00:08:26.145 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:26.145 CC test/nvme/reset/reset.o 00:08:26.145 CC test/nvme/e2edp/nvme_dp.o 00:08:26.145 CC test/nvme/simple_copy/simple_copy.o 00:08:26.145 CC test/nvme/startup/startup.o 00:08:26.145 CC test/nvme/sgl/sgl.o 00:08:26.145 CC test/nvme/overhead/overhead.o 00:08:26.145 CC test/nvme/connect_stress/connect_stress.o 00:08:26.145 CC test/nvme/aer/aer.o 00:08:26.145 CC test/nvme/fdp/fdp.o 00:08:26.145 CC 
test/nvme/boot_partition/boot_partition.o 00:08:26.145 CC test/nvme/err_injection/err_injection.o 00:08:26.145 CC test/nvme/cuse/cuse.o 00:08:26.145 CC test/nvme/compliance/nvme_compliance.o 00:08:26.145 CC test/nvme/reserve/reserve.o 00:08:26.145 CC test/nvme/fused_ordering/fused_ordering.o 00:08:26.145 LINK idxd_perf 00:08:26.145 CC test/accel/dif/dif.o 00:08:26.145 CC test/blobfs/mkfs/mkfs.o 00:08:26.145 LINK vhost 00:08:26.145 CC test/lvol/esnap/esnap.o 00:08:26.403 LINK boot_partition 00:08:26.403 LINK err_injection 00:08:26.403 LINK connect_stress 00:08:26.403 LINK doorbell_aers 00:08:26.403 CC examples/nvme/hotplug/hotplug.o 00:08:26.403 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:26.403 CC examples/nvme/reconnect/reconnect.o 00:08:26.403 LINK fused_ordering 00:08:26.403 CC examples/nvme/abort/abort.o 00:08:26.403 CC examples/nvme/hello_world/hello_world.o 00:08:26.403 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:26.403 CC examples/nvme/arbitration/arbitration.o 00:08:26.403 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:26.403 LINK mkfs 00:08:26.403 LINK startup 00:08:26.403 CC examples/accel/perf/accel_perf.o 00:08:26.403 LINK nvme_dp 00:08:26.403 LINK reset 00:08:26.403 LINK sgl 00:08:26.403 LINK aer 00:08:26.403 CC examples/blob/cli/blobcli.o 00:08:26.663 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:26.663 LINK nvme_compliance 00:08:26.663 CC examples/blob/hello_world/hello_blob.o 00:08:26.663 LINK simple_copy 00:08:26.663 LINK memory_ut 00:08:26.663 LINK reserve 00:08:26.663 LINK overhead 00:08:26.663 LINK cmb_copy 00:08:26.663 LINK fdp 00:08:26.663 LINK pmr_persistence 00:08:26.948 LINK hello_world 00:08:26.948 LINK hello_blob 00:08:26.948 LINK abort 00:08:26.948 LINK hotplug 00:08:26.948 LINK hello_fsdev 00:08:26.948 LINK arbitration 00:08:26.948 LINK reconnect 00:08:26.948 LINK nvme_manage 00:08:26.948 LINK accel_perf 00:08:26.948 LINK blobcli 00:08:27.205 LINK dif 00:08:27.463 CC examples/bdev/hello_world/hello_bdev.o 00:08:27.463 CC examples/bdev/bdevperf/bdevperf.o 00:08:27.463 LINK iscsi_fuzz 00:08:27.463 CC test/bdev/bdevio/bdevio.o 00:08:27.721 LINK hello_bdev 00:08:27.721 LINK cuse 00:08:27.979 LINK bdevio 00:08:28.237 LINK bdevperf 00:08:28.495 CC examples/nvmf/nvmf/nvmf.o 00:08:29.064 LINK nvmf 00:08:31.598 LINK esnap 00:08:31.856 00:08:31.856 real 1m10.290s 00:08:31.856 user 11m51.925s 00:08:31.856 sys 2m37.506s 00:08:31.856 18:04:19 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:08:31.856 18:04:19 make -- common/autotest_common.sh@10 -- $ set +x 00:08:31.856 ************************************ 00:08:31.856 END TEST make 00:08:31.856 ************************************ 00:08:31.856 18:04:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:31.856 18:04:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:31.856 18:04:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:31.856 18:04:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:31.856 18:04:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:08:31.856 18:04:19 -- pm/common@44 -- $ pid=410290 00:08:31.856 18:04:19 -- pm/common@50 -- $ kill -TERM 410290 00:08:31.856 18:04:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:31.856 18:04:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:08:31.856 18:04:19 -- pm/common@44 -- $ pid=410292 00:08:31.856 18:04:19 -- pm/common@50 -- 
$ kill -TERM 410292 00:08:31.856 18:04:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:31.856 18:04:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:08:31.856 18:04:19 -- pm/common@44 -- $ pid=410294 00:08:31.856 18:04:19 -- pm/common@50 -- $ kill -TERM 410294 00:08:31.856 18:04:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:31.856 18:04:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:08:31.856 18:04:19 -- pm/common@44 -- $ pid=410323 00:08:31.856 18:04:19 -- pm/common@50 -- $ sudo -E kill -TERM 410323 00:08:31.856 18:04:19 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:31.856 18:04:19 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:08:31.856 18:04:19 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.856 18:04:19 -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.856 18:04:19 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.856 18:04:19 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.856 18:04:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.856 18:04:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.856 18:04:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.856 18:04:19 -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.856 18:04:19 -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.856 18:04:19 -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.856 18:04:19 -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.856 18:04:19 -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.856 18:04:19 -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.856 18:04:19 -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.856 18:04:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.856 18:04:19 -- scripts/common.sh@344 -- # case "$op" in 00:08:31.856 18:04:19 -- scripts/common.sh@345 -- # : 1 00:08:31.856 18:04:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.856 18:04:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:31.856 18:04:19 -- scripts/common.sh@365 -- # decimal 1 00:08:32.115 18:04:19 -- scripts/common.sh@353 -- # local d=1 00:08:32.115 18:04:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.115 18:04:19 -- scripts/common.sh@355 -- # echo 1 00:08:32.115 18:04:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.115 18:04:19 -- scripts/common.sh@366 -- # decimal 2 00:08:32.115 18:04:19 -- scripts/common.sh@353 -- # local d=2 00:08:32.115 18:04:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.115 18:04:19 -- scripts/common.sh@355 -- # echo 2 00:08:32.115 18:04:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.115 18:04:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.115 18:04:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.115 18:04:19 -- scripts/common.sh@368 -- # return 0 00:08:32.115 18:04:19 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.115 18:04:19 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.115 --rc genhtml_branch_coverage=1 00:08:32.115 --rc genhtml_function_coverage=1 00:08:32.115 --rc genhtml_legend=1 00:08:32.115 --rc geninfo_all_blocks=1 00:08:32.115 --rc geninfo_unexecuted_blocks=1 00:08:32.115 00:08:32.115 ' 00:08:32.115 18:04:19 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.115 --rc genhtml_branch_coverage=1 00:08:32.115 --rc genhtml_function_coverage=1 00:08:32.115 --rc genhtml_legend=1 00:08:32.115 --rc geninfo_all_blocks=1 00:08:32.115 --rc geninfo_unexecuted_blocks=1 00:08:32.115 00:08:32.115 ' 00:08:32.115 18:04:19 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.115 --rc genhtml_branch_coverage=1 00:08:32.115 --rc genhtml_function_coverage=1 00:08:32.115 --rc genhtml_legend=1 00:08:32.115 --rc geninfo_all_blocks=1 00:08:32.115 --rc geninfo_unexecuted_blocks=1 00:08:32.115 00:08:32.115 ' 00:08:32.115 18:04:19 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.115 --rc genhtml_branch_coverage=1 00:08:32.115 --rc genhtml_function_coverage=1 00:08:32.115 --rc genhtml_legend=1 00:08:32.115 --rc geninfo_all_blocks=1 00:08:32.115 --rc geninfo_unexecuted_blocks=1 00:08:32.115 00:08:32.115 ' 00:08:32.115 18:04:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.115 18:04:19 -- nvmf/common.sh@7 -- # uname -s 00:08:32.115 18:04:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.115 18:04:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.115 18:04:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.115 18:04:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.115 18:04:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.115 18:04:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.115 18:04:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.115 18:04:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.115 18:04:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.115 18:04:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.115 18:04:19 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.115 18:04:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.115 18:04:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.115 18:04:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.115 18:04:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.115 18:04:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.115 18:04:19 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.115 18:04:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.115 18:04:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.115 18:04:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.115 18:04:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.115 18:04:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.115 18:04:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.115 18:04:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.115 18:04:19 -- paths/export.sh@5 -- # export PATH 00:08:32.116 18:04:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.116 18:04:19 -- nvmf/common.sh@51 -- # : 0 00:08:32.116 18:04:19 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.116 18:04:19 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.116 18:04:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.116 18:04:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.116 18:04:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.116 18:04:19 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.116 18:04:19 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.116 18:04:19 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.116 18:04:19 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.116 18:04:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:32.116 18:04:19 -- spdk/autotest.sh@32 -- # uname -s 00:08:32.116 18:04:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:32.116 18:04:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:32.116 18:04:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
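The "[: : integer expression expected" message above is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': the flag under test is empty, so test(1) has no integer to compare. A minimal sketch of the usual guard, with a placeholder variable name rather than the real flag in common.sh:

  # default the flag to 0 so the -eq test never sees an empty operand
  SPDK_TEST_EXAMPLE_FLAG=${SPDK_TEST_EXAMPLE_FLAG:-0}   # hypothetical name, not the actual common.sh variable
  if [ "$SPDK_TEST_EXAMPLE_FLAG" -eq 1 ]; then
      echo "feature enabled"
  fi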
00:08:32.116 18:04:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:08:32.116 18:04:19 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:08:32.116 18:04:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:32.116 18:04:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:32.116 18:04:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:32.116 18:04:19 -- spdk/autotest.sh@48 -- # udevadm_pid=470455 00:08:32.116 18:04:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:32.116 18:04:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:32.116 18:04:19 -- pm/common@17 -- # local monitor 00:08:32.116 18:04:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:32.116 18:04:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:32.116 18:04:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:32.116 18:04:19 -- pm/common@21 -- # date +%s 00:08:32.116 18:04:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:32.116 18:04:19 -- pm/common@21 -- # date +%s 00:08:32.116 18:04:19 -- pm/common@25 -- # sleep 1 00:08:32.116 18:04:19 -- pm/common@21 -- # date +%s 00:08:32.116 18:04:19 -- pm/common@21 -- # date +%s 00:08:32.116 18:04:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732640659 00:08:32.116 18:04:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732640659 00:08:32.116 18:04:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732640659 00:08:32.116 18:04:19 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732640659 00:08:32.116 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732640659_collect-cpu-load.pm.log 00:08:32.116 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732640659_collect-vmstat.pm.log 00:08:32.116 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732640659_collect-cpu-temp.pm.log 00:08:32.116 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732640659_collect-bmc-pm.bmc.pm.log 00:08:33.054 18:04:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:33.054 18:04:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:33.054 18:04:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.054 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.054 18:04:20 -- spdk/autotest.sh@59 -- # create_test_list 00:08:33.054 18:04:20 -- common/autotest_common.sh@752 -- # xtrace_disable 00:08:33.054 18:04:20 -- common/autotest_common.sh@10 -- # set +x 00:08:33.054 18:04:20 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:08:33.054 18:04:20 
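The coredump redirection set up above boils down to the standard core_pattern pipe handler; a sketch with placeholder paths (autotest saved the previous systemd-coredump pattern earlier and restores it when the run ends):

  old_core_pattern=$(cat /proc/sys/kernel/core_pattern)                         # remember the current handler
  mkdir -p /path/to/output/coredumps                                            # placeholder output directory
  echo '|/path/to/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern   # pipe cores to the collector: PID, signal, timestamp
  # ... run tests ...
  echo "$old_core_pattern" > /proc/sys/kernel/core_pattern                      # put the old handler back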
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:33.054 18:04:20 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:33.054 18:04:20 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:08:33.054 18:04:20 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:33.054 18:04:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:33.054 18:04:20 -- common/autotest_common.sh@1457 -- # uname 00:08:33.054 18:04:20 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:08:33.054 18:04:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:33.054 18:04:20 -- common/autotest_common.sh@1477 -- # uname 00:08:33.054 18:04:20 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:08:33.054 18:04:20 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:33.054 18:04:20 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:33.054 lcov: LCOV version 1.15 00:08:33.054 18:04:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:09:05.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:05.121 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:09:11.676 18:04:58 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:11.676 18:04:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.676 18:04:58 -- common/autotest_common.sh@10 -- # set +x 00:09:11.676 18:04:58 -- spdk/autotest.sh@78 -- # rm -f 00:09:11.676 18:04:58 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:11.934 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:09:11.934 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:09:11.934 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:09:11.934 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:09:11.934 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:09:11.934 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:09:11.934 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:09:11.934 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:09:11.934 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:09:11.934 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:09:11.934 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:09:12.193 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:09:12.193 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:09:12.193 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:09:12.193 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:09:12.193 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:09:12.193 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:09:12.193 18:05:00 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:09:12.193 18:05:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:12.193 18:05:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:12.193 18:05:00 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:09:12.193 18:05:00 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:12.193 18:05:00 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:09:12.193 18:05:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:12.193 18:05:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:12.193 18:05:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:12.193 18:05:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:12.193 18:05:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:12.193 18:05:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:12.193 18:05:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:12.193 18:05:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:12.193 18:05:00 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:12.193 No valid GPT data, bailing 00:09:12.193 18:05:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:12.193 18:05:00 -- scripts/common.sh@394 -- # pt= 00:09:12.193 18:05:00 -- scripts/common.sh@395 -- # return 1 00:09:12.193 18:05:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:12.193 1+0 records in 00:09:12.193 1+0 records out 00:09:12.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00230336 s, 455 MB/s 00:09:12.193 18:05:00 -- spdk/autotest.sh@105 -- # sync 00:09:12.193 18:05:00 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:12.193 18:05:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:12.193 18:05:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:14.791 18:05:02 -- spdk/autotest.sh@111 -- # uname -s 00:09:14.791 18:05:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:14.791 18:05:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:14.791 18:05:02 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:09:15.727 Hugepages 00:09:15.727 node hugesize free / total 00:09:15.727 node0 1048576kB 0 / 0 00:09:15.727 node0 2048kB 0 / 0 00:09:15.727 node1 1048576kB 0 / 0 00:09:15.727 node1 2048kB 0 / 0 00:09:15.727 00:09:15.727 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:15.727 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:09:15.727 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:09:15.727 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:09:15.727 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:09:15.727 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:09:15.727 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:09:15.727 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:09:15.727 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:09:15.727 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:09:15.727 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:09:15.727 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:09:15.727 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:09:15.727 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:09:15.727 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:09:15.727 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:09:15.727 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:09:15.727 I/OAT 0000:80:04.7 8086 
0e27 1 ioatdma - - 00:09:15.727 18:05:03 -- spdk/autotest.sh@117 -- # uname -s 00:09:15.727 18:05:03 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:15.727 18:05:03 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:15.727 18:05:03 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:09:17.105 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:09:17.105 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:09:17.105 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:09:17.105 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:09:17.105 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:09:17.105 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:09:17.105 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:09:17.105 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:09:17.105 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:09:17.105 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:09:17.105 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:09:17.105 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:09:17.105 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:09:17.105 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:09:17.105 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:09:17.105 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:09:18.044 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:09:18.044 18:05:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:09:18.982 18:05:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:09:18.982 18:05:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:09:18.982 18:05:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:09:18.982 18:05:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:09:18.982 18:05:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:18.982 18:05:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:18.982 18:05:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:18.982 18:05:06 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:18.982 18:05:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:18.982 18:05:06 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:18.982 18:05:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:09:18.982 18:05:06 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:09:20.357 Waiting for block devices as requested 00:09:20.357 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:09:20.357 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:09:20.357 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:09:20.357 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:09:20.357 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:09:20.616 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:09:20.616 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:09:20.616 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:09:20.875 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:09:20.875 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:09:20.875 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:09:21.133 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:09:21.133 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:09:21.133 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:09:21.397 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:09:21.397 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:09:21.397 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:09:21.397 18:05:09 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:09:21.397 18:05:09 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:09:21.398 18:05:09 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:09:21.398 18:05:09 -- common/autotest_common.sh@1487 -- # grep 0000:0b:00.0/nvme/nvme 00:09:21.668 18:05:09 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:09:21.668 18:05:09 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:09:21.668 18:05:09 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:09:21.668 18:05:09 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:09:21.668 18:05:09 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:09:21.668 18:05:09 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:09:21.668 18:05:09 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:09:21.668 18:05:09 -- common/autotest_common.sh@1531 -- # grep oacs 00:09:21.668 18:05:09 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:09:21.668 18:05:09 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:09:21.668 18:05:09 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:09:21.668 18:05:09 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:09:21.669 18:05:09 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:09:21.669 18:05:09 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:09:21.669 18:05:09 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:09:21.669 18:05:09 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:09:21.669 18:05:09 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:09:21.669 18:05:09 -- common/autotest_common.sh@1543 -- # continue 00:09:21.669 18:05:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:21.669 18:05:09 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.669 18:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:21.669 18:05:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:21.669 18:05:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.669 18:05:09 -- common/autotest_common.sh@10 -- # set +x 00:09:21.669 18:05:09 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:09:23.046 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:09:23.046 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:09:23.046 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:09:23.046 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:09:23.046 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:09:23.046 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:09:23.046 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:09:23.046 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:09:23.046 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:09:23.046 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:09:23.046 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:09:23.046 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:09:23.046 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:09:23.046 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:09:23.046 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:09:23.046 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:09:23.985 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:09:23.985 18:05:11 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:09:23.985 18:05:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.985 18:05:11 -- common/autotest_common.sh@10 -- # set +x 00:09:23.985 18:05:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:23.985 18:05:11 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:09:24.243 18:05:11 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:09:24.243 18:05:11 -- common/autotest_common.sh@1563 -- # bdfs=() 00:09:24.243 18:05:11 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:09:24.243 18:05:11 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:09:24.243 18:05:11 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:09:24.243 18:05:11 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:09:24.243 18:05:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:24.243 18:05:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:09:24.243 18:05:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:24.243 18:05:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:24.243 18:05:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:24.243 18:05:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:09:24.243 18:05:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:09:24.243 18:05:12 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:09:24.243 18:05:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:09:24.243 18:05:12 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:09:24.243 18:05:12 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:09:24.243 18:05:12 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:09:24.243 18:05:12 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:09:24.243 18:05:12 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0b:00.0 00:09:24.243 18:05:12 -- common/autotest_common.sh@1579 -- # [[ -z 0000:0b:00.0 ]] 00:09:24.243 18:05:12 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=480998 00:09:24.243 18:05:12 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:24.243 18:05:12 -- common/autotest_common.sh@1585 -- # waitforlisten 480998 00:09:24.243 18:05:12 -- common/autotest_common.sh@835 -- # '[' -z 480998 ']' 00:09:24.243 18:05:12 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.243 18:05:12 -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.243 18:05:12 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.243 18:05:12 -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.243 18:05:12 -- common/autotest_common.sh@10 -- # set +x 00:09:24.243 [2024-11-26 18:05:12.119691] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:09:24.243 [2024-11-26 18:05:12.119789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480998 ] 00:09:24.243 [2024-11-26 18:05:12.186411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.243 [2024-11-26 18:05:12.244931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.501 18:05:12 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.501 18:05:12 -- common/autotest_common.sh@868 -- # return 0 00:09:24.501 18:05:12 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:09:24.501 18:05:12 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:09:24.501 18:05:12 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:09:27.782 nvme0n1 00:09:27.782 18:05:15 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:09:28.040 [2024-11-26 18:05:15.875205] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:09:28.040 [2024-11-26 18:05:15.875246] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:09:28.040 request: 00:09:28.040 { 00:09:28.040 "nvme_ctrlr_name": "nvme0", 00:09:28.040 "password": "test", 00:09:28.040 "method": "bdev_nvme_opal_revert", 00:09:28.040 "req_id": 1 00:09:28.040 } 00:09:28.040 Got JSON-RPC error response 00:09:28.040 response: 00:09:28.040 { 00:09:28.040 "code": -32603, 00:09:28.040 "message": "Internal error" 00:09:28.040 } 00:09:28.040 18:05:15 -- common/autotest_common.sh@1591 -- # true 00:09:28.040 18:05:15 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:09:28.040 18:05:15 -- common/autotest_common.sh@1595 -- # killprocess 480998 00:09:28.040 18:05:15 -- common/autotest_common.sh@954 -- # '[' -z 480998 ']' 00:09:28.040 18:05:15 -- common/autotest_common.sh@958 -- # kill -0 480998 00:09:28.040 18:05:15 -- common/autotest_common.sh@959 -- # uname 00:09:28.040 18:05:15 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.040 18:05:15 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 480998 00:09:28.040 18:05:15 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.040 18:05:15 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.040 18:05:15 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 480998' 00:09:28.040 killing process with pid 480998 00:09:28.040 18:05:15 -- common/autotest_common.sh@973 -- # kill 480998 00:09:28.040 18:05:15 -- common/autotest_common.sh@978 -- # wait 480998 00:09:29.936 18:05:17 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:29.936 18:05:17 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:29.936 18:05:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:29.936 18:05:17 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:29.936 18:05:17 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:29.936 18:05:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.936 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:29.936 18:05:17 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:29.936 18:05:17 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
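The Opal failure above can be replayed by hand against the spdk_tgt listening on /var/tmp/spdk.sock, using the same two RPCs the test issued (bdev name and BDF taken from this log); the revert returns the -32603 "Internal error" seen here whenever the admin SP session cannot be started (error 18 above):

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0
  ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test   # fails: Revert TPer failure, JSON-RPC error -32603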
00:09:29.936 18:05:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.936 18:05:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.936 18:05:17 -- common/autotest_common.sh@10 -- # set +x 00:09:29.936 ************************************ 00:09:29.936 START TEST env 00:09:29.936 ************************************ 00:09:29.936 18:05:17 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:09:29.936 * Looking for test storage... 00:09:29.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:09:29.936 18:05:17 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.936 18:05:17 env -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.936 18:05:17 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.936 18:05:17 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.936 18:05:17 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.936 18:05:17 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.936 18:05:17 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.936 18:05:17 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.936 18:05:17 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.936 18:05:17 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.936 18:05:17 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.936 18:05:17 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.936 18:05:17 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.936 18:05:17 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.936 18:05:17 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.936 18:05:17 env -- scripts/common.sh@344 -- # case "$op" in 00:09:29.936 18:05:17 env -- scripts/common.sh@345 -- # : 1 00:09:29.936 18:05:17 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.936 18:05:17 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.936 18:05:17 env -- scripts/common.sh@365 -- # decimal 1 00:09:29.936 18:05:17 env -- scripts/common.sh@353 -- # local d=1 00:09:29.936 18:05:17 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.936 18:05:17 env -- scripts/common.sh@355 -- # echo 1 00:09:29.936 18:05:17 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.936 18:05:17 env -- scripts/common.sh@366 -- # decimal 2 00:09:29.936 18:05:17 env -- scripts/common.sh@353 -- # local d=2 00:09:29.936 18:05:17 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.936 18:05:17 env -- scripts/common.sh@355 -- # echo 2 00:09:29.936 18:05:17 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.936 18:05:17 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.936 18:05:17 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.936 18:05:17 env -- scripts/common.sh@368 -- # return 0 00:09:29.936 18:05:17 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.936 18:05:17 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.936 --rc genhtml_branch_coverage=1 00:09:29.936 --rc genhtml_function_coverage=1 00:09:29.936 --rc genhtml_legend=1 00:09:29.936 --rc geninfo_all_blocks=1 00:09:29.936 --rc geninfo_unexecuted_blocks=1 00:09:29.936 00:09:29.936 ' 00:09:29.936 18:05:17 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.936 --rc genhtml_branch_coverage=1 00:09:29.936 --rc genhtml_function_coverage=1 00:09:29.937 --rc genhtml_legend=1 00:09:29.937 --rc geninfo_all_blocks=1 00:09:29.937 --rc geninfo_unexecuted_blocks=1 00:09:29.937 00:09:29.937 ' 00:09:29.937 18:05:17 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.937 --rc genhtml_branch_coverage=1 00:09:29.937 --rc genhtml_function_coverage=1 00:09:29.937 --rc genhtml_legend=1 00:09:29.937 --rc geninfo_all_blocks=1 00:09:29.937 --rc geninfo_unexecuted_blocks=1 00:09:29.937 00:09:29.937 ' 00:09:29.937 18:05:17 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.937 --rc genhtml_branch_coverage=1 00:09:29.937 --rc genhtml_function_coverage=1 00:09:29.937 --rc genhtml_legend=1 00:09:29.937 --rc geninfo_all_blocks=1 00:09:29.937 --rc geninfo_unexecuted_blocks=1 00:09:29.937 00:09:29.937 ' 00:09:29.937 18:05:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:09:29.937 18:05:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.937 18:05:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.937 18:05:17 env -- common/autotest_common.sh@10 -- # set +x 00:09:29.937 ************************************ 00:09:29.937 START TEST env_memory 00:09:29.937 ************************************ 00:09:29.937 18:05:17 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:09:29.937 00:09:29.937 00:09:29.937 CUnit - A unit testing framework for C - Version 2.1-3 00:09:29.937 http://cunit.sourceforge.net/ 00:09:29.937 00:09:29.937 00:09:29.937 Suite: memory 00:09:29.937 Test: alloc and free memory map ...[2024-11-26 18:05:17.865822] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:29.937 passed 00:09:29.937 Test: mem map translation ...[2024-11-26 18:05:17.886297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:29.937 [2024-11-26 18:05:17.886322] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:29.937 [2024-11-26 18:05:17.886362] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:29.937 [2024-11-26 18:05:17.886373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:29.937 passed 00:09:29.937 Test: mem map registration ...[2024-11-26 18:05:17.927051] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:29.937 [2024-11-26 18:05:17.927070] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:29.937 passed 00:09:30.194 Test: mem map adjacent registrations ...passed 00:09:30.194 00:09:30.194 Run Summary: Type Total Ran Passed Failed Inactive 00:09:30.194 suites 1 1 n/a 0 0 00:09:30.194 tests 4 4 4 0 0 00:09:30.194 asserts 152 152 152 0 n/a 00:09:30.194 00:09:30.194 Elapsed time = 0.141 seconds 00:09:30.194 00:09:30.194 real 0m0.150s 00:09:30.194 user 0m0.140s 00:09:30.194 sys 0m0.009s 00:09:30.194 18:05:17 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.195 18:05:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:30.195 ************************************ 00:09:30.195 END TEST env_memory 00:09:30.195 ************************************ 00:09:30.195 18:05:18 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:30.195 18:05:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.195 18:05:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.195 18:05:18 env -- common/autotest_common.sh@10 -- # set +x 00:09:30.195 ************************************ 00:09:30.195 START TEST env_vtophys 00:09:30.195 ************************************ 00:09:30.195 18:05:18 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:09:30.195 EAL: lib.eal log level changed from notice to debug 00:09:30.195 EAL: Detected lcore 0 as core 0 on socket 0 00:09:30.195 EAL: Detected lcore 1 as core 1 on socket 0 00:09:30.195 EAL: Detected lcore 2 as core 2 on socket 0 00:09:30.195 EAL: Detected lcore 3 as core 3 on socket 0 00:09:30.195 EAL: Detected lcore 4 as core 4 on socket 0 00:09:30.195 EAL: Detected lcore 5 as core 5 on socket 0 00:09:30.195 EAL: Detected lcore 6 as core 8 on socket 0 00:09:30.195 EAL: Detected lcore 7 as core 9 on socket 0 00:09:30.195 EAL: Detected lcore 8 as core 10 on socket 0 00:09:30.195 EAL: Detected lcore 9 as core 11 on socket 0 00:09:30.195 EAL: Detected lcore 10 
as core 12 on socket 0 00:09:30.195 EAL: Detected lcore 11 as core 13 on socket 0 00:09:30.195 EAL: Detected lcore 12 as core 0 on socket 1 00:09:30.195 EAL: Detected lcore 13 as core 1 on socket 1 00:09:30.195 EAL: Detected lcore 14 as core 2 on socket 1 00:09:30.195 EAL: Detected lcore 15 as core 3 on socket 1 00:09:30.195 EAL: Detected lcore 16 as core 4 on socket 1 00:09:30.195 EAL: Detected lcore 17 as core 5 on socket 1 00:09:30.195 EAL: Detected lcore 18 as core 8 on socket 1 00:09:30.195 EAL: Detected lcore 19 as core 9 on socket 1 00:09:30.195 EAL: Detected lcore 20 as core 10 on socket 1 00:09:30.195 EAL: Detected lcore 21 as core 11 on socket 1 00:09:30.195 EAL: Detected lcore 22 as core 12 on socket 1 00:09:30.195 EAL: Detected lcore 23 as core 13 on socket 1 00:09:30.195 EAL: Detected lcore 24 as core 0 on socket 0 00:09:30.195 EAL: Detected lcore 25 as core 1 on socket 0 00:09:30.195 EAL: Detected lcore 26 as core 2 on socket 0 00:09:30.195 EAL: Detected lcore 27 as core 3 on socket 0 00:09:30.195 EAL: Detected lcore 28 as core 4 on socket 0 00:09:30.195 EAL: Detected lcore 29 as core 5 on socket 0 00:09:30.195 EAL: Detected lcore 30 as core 8 on socket 0 00:09:30.195 EAL: Detected lcore 31 as core 9 on socket 0 00:09:30.195 EAL: Detected lcore 32 as core 10 on socket 0 00:09:30.195 EAL: Detected lcore 33 as core 11 on socket 0 00:09:30.195 EAL: Detected lcore 34 as core 12 on socket 0 00:09:30.195 EAL: Detected lcore 35 as core 13 on socket 0 00:09:30.195 EAL: Detected lcore 36 as core 0 on socket 1 00:09:30.195 EAL: Detected lcore 37 as core 1 on socket 1 00:09:30.195 EAL: Detected lcore 38 as core 2 on socket 1 00:09:30.195 EAL: Detected lcore 39 as core 3 on socket 1 00:09:30.195 EAL: Detected lcore 40 as core 4 on socket 1 00:09:30.195 EAL: Detected lcore 41 as core 5 on socket 1 00:09:30.195 EAL: Detected lcore 42 as core 8 on socket 1 00:09:30.195 EAL: Detected lcore 43 as core 9 on socket 1 00:09:30.195 EAL: Detected lcore 44 as core 10 on socket 1 00:09:30.195 EAL: Detected lcore 45 as core 11 on socket 1 00:09:30.195 EAL: Detected lcore 46 as core 12 on socket 1 00:09:30.195 EAL: Detected lcore 47 as core 13 on socket 1 00:09:30.195 EAL: Maximum logical cores by configuration: 128 00:09:30.195 EAL: Detected CPU lcores: 48 00:09:30.195 EAL: Detected NUMA nodes: 2 00:09:30.195 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:30.195 EAL: Detected shared linkage of DPDK 00:09:30.195 EAL: No shared files mode enabled, IPC will be disabled 00:09:30.195 EAL: Bus pci wants IOVA as 'DC' 00:09:30.195 EAL: Buses did not request a specific IOVA mode. 00:09:30.195 EAL: IOMMU is available, selecting IOVA as VA mode. 00:09:30.195 EAL: Selected IOVA mode 'VA' 00:09:30.195 EAL: Probing VFIO support... 00:09:30.195 EAL: IOMMU type 1 (Type 1) is supported 00:09:30.195 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:30.195 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:30.195 EAL: VFIO support initialized 00:09:30.195 EAL: Ask a virtual area of 0x2e000 bytes 00:09:30.195 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:30.195 EAL: Setting up physically contiguous memory... 
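The EAL conclusions above ("IOMMU is available, selecting IOVA as VA", "VFIO support initialized") can be cross-checked from a shell with standard kernel interfaces, for example:

  ls /sys/kernel/iommu_groups | wc -l   # non-zero when the IOMMU is active, which is what lets EAL pick IOVA mode 'VA'
  lsmod | grep vfio_pci                 # module loaded -> "VFIO support initialized"
  grep -i huge /proc/meminfo            # the 2048 kB hugepages backing the memseg lists reserved here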
00:09:30.195 EAL: Setting maximum number of open files to 524288 00:09:30.195 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:30.195 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:09:30.195 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:30.195 EAL: Ask a virtual area of 0x61000 bytes 00:09:30.195 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:30.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:30.195 EAL: Ask a virtual area of 0x400000000 bytes 00:09:30.195 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:30.195 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:30.195 EAL: Ask a virtual area of 0x61000 bytes 00:09:30.195 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:30.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:30.195 EAL: Ask a virtual area of 0x400000000 bytes 00:09:30.195 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:30.195 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:30.195 EAL: Ask a virtual area of 0x61000 bytes 00:09:30.195 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:30.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:30.195 EAL: Ask a virtual area of 0x400000000 bytes 00:09:30.195 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:30.195 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:30.195 EAL: Ask a virtual area of 0x61000 bytes 00:09:30.195 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:30.195 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:30.195 EAL: Ask a virtual area of 0x400000000 bytes 00:09:30.195 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:30.195 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:30.195 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:09:30.195 EAL: Ask a virtual area of 0x61000 bytes 00:09:30.195 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:09:30.195 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:30.195 EAL: Ask a virtual area of 0x400000000 bytes 00:09:30.195 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:09:30.195 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:09:30.195 EAL: Ask a virtual area of 0x61000 bytes 00:09:30.195 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:09:30.195 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:30.195 EAL: Ask a virtual area of 0x400000000 bytes 00:09:30.195 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:09:30.195 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:09:30.195 EAL: Ask a virtual area of 0x61000 bytes 00:09:30.195 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:09:30.195 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:30.195 EAL: Ask a virtual area of 0x400000000 bytes 00:09:30.195 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:09:30.195 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:09:30.195 EAL: Ask a virtual area of 0x61000 bytes 00:09:30.195 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:09:30.195 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:09:30.195 EAL: Ask a virtual area of 0x400000000 bytes 00:09:30.195 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:09:30.195 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:09:30.195 EAL: Hugepages will be freed exactly as allocated. 00:09:30.195 EAL: No shared files mode enabled, IPC is disabled 00:09:30.195 EAL: No shared files mode enabled, IPC is disabled 00:09:30.195 EAL: TSC frequency is ~2700000 KHz 00:09:30.195 EAL: Main lcore 0 is ready (tid=7f2668153a00;cpuset=[0]) 00:09:30.195 EAL: Trying to obtain current memory policy. 00:09:30.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.195 EAL: Restoring previous memory policy: 0 00:09:30.195 EAL: request: mp_malloc_sync 00:09:30.195 EAL: No shared files mode enabled, IPC is disabled 00:09:30.195 EAL: Heap on socket 0 was expanded by 2MB 00:09:30.195 EAL: No shared files mode enabled, IPC is disabled 00:09:30.195 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:30.195 EAL: Mem event callback 'spdk:(nil)' registered 00:09:30.195 00:09:30.195 00:09:30.195 CUnit - A unit testing framework for C - Version 2.1-3 00:09:30.195 http://cunit.sourceforge.net/ 00:09:30.195 00:09:30.195 00:09:30.195 Suite: components_suite 00:09:30.195 Test: vtophys_malloc_test ...passed 00:09:30.195 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:30.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.195 EAL: Restoring previous memory policy: 4 00:09:30.195 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.195 EAL: request: mp_malloc_sync 00:09:30.195 EAL: No shared files mode enabled, IPC is disabled 00:09:30.195 EAL: Heap on socket 0 was expanded by 4MB 00:09:30.195 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.195 EAL: request: mp_malloc_sync 00:09:30.195 EAL: No shared files mode enabled, IPC is disabled 00:09:30.195 EAL: Heap on socket 0 was shrunk by 4MB 00:09:30.195 EAL: Trying to obtain current memory policy. 00:09:30.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.195 EAL: Restoring previous memory policy: 4 00:09:30.195 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.195 EAL: request: mp_malloc_sync 00:09:30.195 EAL: No shared files mode enabled, IPC is disabled 00:09:30.195 EAL: Heap on socket 0 was expanded by 6MB 00:09:30.195 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.195 EAL: request: mp_malloc_sync 00:09:30.195 EAL: No shared files mode enabled, IPC is disabled 00:09:30.195 EAL: Heap on socket 0 was shrunk by 6MB 00:09:30.195 EAL: Trying to obtain current memory policy. 00:09:30.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.195 EAL: Restoring previous memory policy: 4 00:09:30.195 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.195 EAL: request: mp_malloc_sync 00:09:30.195 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: Heap on socket 0 was expanded by 10MB 00:09:30.196 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.196 EAL: request: mp_malloc_sync 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: Heap on socket 0 was shrunk by 10MB 00:09:30.196 EAL: Trying to obtain current memory policy. 
00:09:30.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.196 EAL: Restoring previous memory policy: 4 00:09:30.196 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.196 EAL: request: mp_malloc_sync 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: Heap on socket 0 was expanded by 18MB 00:09:30.196 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.196 EAL: request: mp_malloc_sync 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: Heap on socket 0 was shrunk by 18MB 00:09:30.196 EAL: Trying to obtain current memory policy. 00:09:30.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.196 EAL: Restoring previous memory policy: 4 00:09:30.196 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.196 EAL: request: mp_malloc_sync 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: Heap on socket 0 was expanded by 34MB 00:09:30.196 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.196 EAL: request: mp_malloc_sync 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: Heap on socket 0 was shrunk by 34MB 00:09:30.196 EAL: Trying to obtain current memory policy. 00:09:30.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.196 EAL: Restoring previous memory policy: 4 00:09:30.196 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.196 EAL: request: mp_malloc_sync 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: Heap on socket 0 was expanded by 66MB 00:09:30.196 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.196 EAL: request: mp_malloc_sync 00:09:30.196 EAL: No shared files mode enabled, IPC is disabled 00:09:30.196 EAL: Heap on socket 0 was shrunk by 66MB 00:09:30.196 EAL: Trying to obtain current memory policy. 00:09:30.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.453 EAL: Restoring previous memory policy: 4 00:09:30.453 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.453 EAL: request: mp_malloc_sync 00:09:30.453 EAL: No shared files mode enabled, IPC is disabled 00:09:30.453 EAL: Heap on socket 0 was expanded by 130MB 00:09:30.453 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.453 EAL: request: mp_malloc_sync 00:09:30.453 EAL: No shared files mode enabled, IPC is disabled 00:09:30.453 EAL: Heap on socket 0 was shrunk by 130MB 00:09:30.453 EAL: Trying to obtain current memory policy. 00:09:30.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.453 EAL: Restoring previous memory policy: 4 00:09:30.453 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.453 EAL: request: mp_malloc_sync 00:09:30.453 EAL: No shared files mode enabled, IPC is disabled 00:09:30.453 EAL: Heap on socket 0 was expanded by 258MB 00:09:30.453 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.453 EAL: request: mp_malloc_sync 00:09:30.453 EAL: No shared files mode enabled, IPC is disabled 00:09:30.453 EAL: Heap on socket 0 was shrunk by 258MB 00:09:30.453 EAL: Trying to obtain current memory policy. 
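The expand/shrink sizes in this malloc sweep (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, and continuing to 514MB and 1026MB below) are each 2^n + 2 MB, presumably the test's doubling allocation plus one 2MB hugepage of overhead; the whole sequence can be reproduced with:

  for n in $(seq 1 10); do echo "$(( (1 << n) + 2 ))MB"; done   # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB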
00:09:30.453 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:30.710 EAL: Restoring previous memory policy: 4 00:09:30.710 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.710 EAL: request: mp_malloc_sync 00:09:30.710 EAL: No shared files mode enabled, IPC is disabled 00:09:30.710 EAL: Heap on socket 0 was expanded by 514MB 00:09:30.710 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.967 EAL: request: mp_malloc_sync 00:09:30.967 EAL: No shared files mode enabled, IPC is disabled 00:09:30.967 EAL: Heap on socket 0 was shrunk by 514MB 00:09:30.967 EAL: Trying to obtain current memory policy. 00:09:30.967 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:31.225 EAL: Restoring previous memory policy: 4 00:09:31.225 EAL: Calling mem event callback 'spdk:(nil)' 00:09:31.225 EAL: request: mp_malloc_sync 00:09:31.225 EAL: No shared files mode enabled, IPC is disabled 00:09:31.225 EAL: Heap on socket 0 was expanded by 1026MB 00:09:31.481 EAL: Calling mem event callback 'spdk:(nil)' 00:09:31.737 EAL: request: mp_malloc_sync 00:09:31.737 EAL: No shared files mode enabled, IPC is disabled 00:09:31.737 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:31.737 passed 00:09:31.737 00:09:31.737 Run Summary: Type Total Ran Passed Failed Inactive 00:09:31.737 suites 1 1 n/a 0 0 00:09:31.737 tests 2 2 2 0 0 00:09:31.737 asserts 497 497 497 0 n/a 00:09:31.737 00:09:31.737 Elapsed time = 1.355 seconds 00:09:31.737 EAL: Calling mem event callback 'spdk:(nil)' 00:09:31.737 EAL: request: mp_malloc_sync 00:09:31.737 EAL: No shared files mode enabled, IPC is disabled 00:09:31.737 EAL: Heap on socket 0 was shrunk by 2MB 00:09:31.737 EAL: No shared files mode enabled, IPC is disabled 00:09:31.737 EAL: No shared files mode enabled, IPC is disabled 00:09:31.737 EAL: No shared files mode enabled, IPC is disabled 00:09:31.737 00:09:31.737 real 0m1.471s 00:09:31.737 user 0m0.853s 00:09:31.737 sys 0m0.588s 00:09:31.737 18:05:19 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.737 18:05:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:31.737 ************************************ 00:09:31.737 END TEST env_vtophys 00:09:31.737 ************************************ 00:09:31.737 18:05:19 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:09:31.737 18:05:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.738 18:05:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.738 18:05:19 env -- common/autotest_common.sh@10 -- # set +x 00:09:31.738 ************************************ 00:09:31.738 START TEST env_pci 00:09:31.738 ************************************ 00:09:31.738 18:05:19 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:09:31.738 00:09:31.738 00:09:31.738 CUnit - A unit testing framework for C - Version 2.1-3 00:09:31.738 http://cunit.sourceforge.net/ 00:09:31.738 00:09:31.738 00:09:31.738 Suite: pci 00:09:31.738 Test: pci_hook ...[2024-11-26 18:05:19.563013] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 481899 has claimed it 00:09:31.738 EAL: Cannot find device (10000:00:01.0) 00:09:31.738 EAL: Failed to attach device on primary process 00:09:31.738 passed 00:09:31.738 00:09:31.738 Run Summary: Type Total Ran Passed Failed Inactive 
00:09:31.738 suites 1 1 n/a 0 0 00:09:31.738 tests 1 1 1 0 0 00:09:31.738 asserts 25 25 25 0 n/a 00:09:31.738 00:09:31.738 Elapsed time = 0.022 seconds 00:09:31.738 00:09:31.738 real 0m0.035s 00:09:31.738 user 0m0.010s 00:09:31.738 sys 0m0.025s 00:09:31.738 18:05:19 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.738 18:05:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:31.738 ************************************ 00:09:31.738 END TEST env_pci 00:09:31.738 ************************************ 00:09:31.738 18:05:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:31.738 18:05:19 env -- env/env.sh@15 -- # uname 00:09:31.738 18:05:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:31.738 18:05:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:31.738 18:05:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:31.738 18:05:19 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.738 18:05:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.738 18:05:19 env -- common/autotest_common.sh@10 -- # set +x 00:09:31.738 ************************************ 00:09:31.738 START TEST env_dpdk_post_init 00:09:31.738 ************************************ 00:09:31.738 18:05:19 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:31.738 EAL: Detected CPU lcores: 48 00:09:31.738 EAL: Detected NUMA nodes: 2 00:09:31.738 EAL: Detected shared linkage of DPDK 00:09:31.738 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:31.738 EAL: Selected IOVA mode 'VA' 00:09:31.738 EAL: VFIO support initialized 00:09:31.738 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:31.995 EAL: Using IOMMU type 1 (Type 1) 00:09:31.995 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:09:31.995 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:09:31.995 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:09:31.995 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:09:31.995 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:09:31.995 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:09:31.995 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:09:31.995 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:09:32.930 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:09:32.930 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:09:32.930 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:09:32.930 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:09:32.930 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:09:32.930 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:09:32.930 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:09:32.930 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:09:32.930 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 
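The env_dpdk_post_init run above brings up the SPDK environment with the same -c 0x1 core mask and --base-virtaddr=0x200000000000 that env.sh passes on the command line, then lets the PCI subsystem probe the ioat and NVMe devices listed. A minimal sketch of the equivalent programmatic initialization using only the public spdk/env.h API (the program name and error handling here are illustrative):

#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "env_init_sketch";          /* illustrative name */
    opts.core_mask = "0x1";                 /* matches -c 0x1 above */
    opts.base_virtaddr = 0x200000000000ULL; /* matches --base-virtaddr above */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* PCI probing, spdk_vtophys() lookups, etc. would happen here. */

    spdk_env_fini();
    return 0;
}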
00:09:36.207 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:09:36.207 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:09:36.207 Starting DPDK initialization... 00:09:36.207 Starting SPDK post initialization... 00:09:36.207 SPDK NVMe probe 00:09:36.207 Attaching to 0000:0b:00.0 00:09:36.207 Attached to 0000:0b:00.0 00:09:36.207 Cleaning up... 00:09:36.207 00:09:36.207 real 0m4.336s 00:09:36.207 user 0m2.986s 00:09:36.207 sys 0m0.415s 00:09:36.207 18:05:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.207 18:05:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:36.207 ************************************ 00:09:36.207 END TEST env_dpdk_post_init 00:09:36.207 ************************************ 00:09:36.207 18:05:23 env -- env/env.sh@26 -- # uname 00:09:36.207 18:05:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:36.207 18:05:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:36.207 18:05:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.207 18:05:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.207 18:05:24 env -- common/autotest_common.sh@10 -- # set +x 00:09:36.207 ************************************ 00:09:36.207 START TEST env_mem_callbacks 00:09:36.207 ************************************ 00:09:36.207 18:05:24 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:36.207 EAL: Detected CPU lcores: 48 00:09:36.207 EAL: Detected NUMA nodes: 2 00:09:36.207 EAL: Detected shared linkage of DPDK 00:09:36.207 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:36.207 EAL: Selected IOVA mode 'VA' 00:09:36.207 EAL: VFIO support initialized 00:09:36.207 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:36.207 00:09:36.207 00:09:36.207 CUnit - A unit testing framework for C - Version 2.1-3 00:09:36.207 http://cunit.sourceforge.net/ 00:09:36.207 00:09:36.207 00:09:36.207 Suite: memory 00:09:36.207 Test: test ... 
00:09:36.207 register 0x200000200000 2097152 00:09:36.207 malloc 3145728 00:09:36.207 register 0x200000400000 4194304 00:09:36.207 buf 0x200000500000 len 3145728 PASSED 00:09:36.207 malloc 64 00:09:36.207 buf 0x2000004fff40 len 64 PASSED 00:09:36.207 malloc 4194304 00:09:36.207 register 0x200000800000 6291456 00:09:36.207 buf 0x200000a00000 len 4194304 PASSED 00:09:36.207 free 0x200000500000 3145728 00:09:36.207 free 0x2000004fff40 64 00:09:36.207 unregister 0x200000400000 4194304 PASSED 00:09:36.207 free 0x200000a00000 4194304 00:09:36.207 unregister 0x200000800000 6291456 PASSED 00:09:36.207 malloc 8388608 00:09:36.207 register 0x200000400000 10485760 00:09:36.207 buf 0x200000600000 len 8388608 PASSED 00:09:36.207 free 0x200000600000 8388608 00:09:36.207 unregister 0x200000400000 10485760 PASSED 00:09:36.207 passed 00:09:36.207 00:09:36.207 Run Summary: Type Total Ran Passed Failed Inactive 00:09:36.207 suites 1 1 n/a 0 0 00:09:36.207 tests 1 1 1 0 0 00:09:36.207 asserts 15 15 15 0 n/a 00:09:36.207 00:09:36.207 Elapsed time = 0.005 seconds 00:09:36.207 00:09:36.207 real 0m0.048s 00:09:36.207 user 0m0.016s 00:09:36.207 sys 0m0.032s 00:09:36.207 18:05:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.207 18:05:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:36.207 ************************************ 00:09:36.207 END TEST env_mem_callbacks 00:09:36.207 ************************************ 00:09:36.207 00:09:36.207 real 0m6.435s 00:09:36.207 user 0m4.183s 00:09:36.207 sys 0m1.309s 00:09:36.207 18:05:24 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.207 18:05:24 env -- common/autotest_common.sh@10 -- # set +x 00:09:36.207 ************************************ 00:09:36.207 END TEST env 00:09:36.207 ************************************ 00:09:36.207 18:05:24 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:09:36.207 18:05:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.207 18:05:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.207 18:05:24 -- common/autotest_common.sh@10 -- # set +x 00:09:36.207 ************************************ 00:09:36.207 START TEST rpc 00:09:36.207 ************************************ 00:09:36.207 18:05:24 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:09:36.207 * Looking for test storage... 
00:09:36.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:36.207 18:05:24 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.207 18:05:24 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.207 18:05:24 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.484 18:05:24 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.484 18:05:24 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.484 18:05:24 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.484 18:05:24 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.484 18:05:24 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.484 18:05:24 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.484 18:05:24 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.484 18:05:24 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.484 18:05:24 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.484 18:05:24 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.484 18:05:24 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.484 18:05:24 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:36.484 18:05:24 rpc -- scripts/common.sh@345 -- # : 1 00:09:36.484 18:05:24 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.484 18:05:24 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.484 18:05:24 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:36.484 18:05:24 rpc -- scripts/common.sh@353 -- # local d=1 00:09:36.484 18:05:24 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.484 18:05:24 rpc -- scripts/common.sh@355 -- # echo 1 00:09:36.484 18:05:24 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.484 18:05:24 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:36.484 18:05:24 rpc -- scripts/common.sh@353 -- # local d=2 00:09:36.484 18:05:24 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.484 18:05:24 rpc -- scripts/common.sh@355 -- # echo 2 00:09:36.484 18:05:24 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.484 18:05:24 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.484 18:05:24 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.484 18:05:24 rpc -- scripts/common.sh@368 -- # return 0 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.484 --rc genhtml_branch_coverage=1 00:09:36.484 --rc genhtml_function_coverage=1 00:09:36.484 --rc genhtml_legend=1 00:09:36.484 --rc geninfo_all_blocks=1 00:09:36.484 --rc geninfo_unexecuted_blocks=1 00:09:36.484 00:09:36.484 ' 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.484 --rc genhtml_branch_coverage=1 00:09:36.484 --rc genhtml_function_coverage=1 00:09:36.484 --rc genhtml_legend=1 00:09:36.484 --rc geninfo_all_blocks=1 00:09:36.484 --rc geninfo_unexecuted_blocks=1 00:09:36.484 00:09:36.484 ' 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.484 --rc genhtml_branch_coverage=1 00:09:36.484 --rc genhtml_function_coverage=1 
00:09:36.484 --rc genhtml_legend=1 00:09:36.484 --rc geninfo_all_blocks=1 00:09:36.484 --rc geninfo_unexecuted_blocks=1 00:09:36.484 00:09:36.484 ' 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.484 --rc genhtml_branch_coverage=1 00:09:36.484 --rc genhtml_function_coverage=1 00:09:36.484 --rc genhtml_legend=1 00:09:36.484 --rc geninfo_all_blocks=1 00:09:36.484 --rc geninfo_unexecuted_blocks=1 00:09:36.484 00:09:36.484 ' 00:09:36.484 18:05:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=482671 00:09:36.484 18:05:24 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:09:36.484 18:05:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:36.484 18:05:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 482671 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@835 -- # '[' -z 482671 ']' 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.484 18:05:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.484 [2024-11-26 18:05:24.349571] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:09:36.484 [2024-11-26 18:05:24.349676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482671 ] 00:09:36.484 [2024-11-26 18:05:24.414689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.484 [2024-11-26 18:05:24.470178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:36.484 [2024-11-26 18:05:24.470234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 482671' to capture a snapshot of events at runtime. 00:09:36.484 [2024-11-26 18:05:24.470262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.484 [2024-11-26 18:05:24.470273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.484 [2024-11-26 18:05:24.470282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid482671 for offline analysis/debug. 
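From this point the rpc_integrity, rpc_plugins, rpc_trace_cmd_test and rpc_daemon_integrity tests drive the freshly started spdk_tgt entirely through rpc_cmd, which is effectively scripts/rpc.py speaking JSON-RPC 2.0 over the UNIX domain socket /var/tmp/spdk.sock. A minimal sketch of that exchange in C is below; it is a hypothetical one-shot client that issues bdev_get_bdevs (the same method the rpc_integrity test calls) and naively assumes the reply fits in a single read, which a real client should not do.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int
main(void)
{
    const char *req = "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char buf[65536];
    ssize_t n;
    int fd;

    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect /var/tmp/spdk.sock");
        return 1;
    }
    if (write(fd, req, strlen(req)) != (ssize_t)strlen(req)) {
        perror("write");
        close(fd);
        return 1;
    }
    n = read(fd, buf, sizeof(buf) - 1); /* naive: one read, small reply */
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }
    close(fd);
    return 0;
}

The bdev JSON arrays dumped in the rpc_integrity and rpc_daemon_integrity output below are exactly the kind of response such a request returns.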
00:09:36.484 [2024-11-26 18:05:24.470829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.742 18:05:24 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.742 18:05:24 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:36.743 18:05:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:36.743 18:05:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:36.743 18:05:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:36.743 18:05:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:36.743 18:05:24 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.743 18:05:24 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.743 18:05:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 ************************************ 00:09:37.001 START TEST rpc_integrity 00:09:37.001 ************************************ 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:37.001 { 00:09:37.001 "name": "Malloc0", 00:09:37.001 "aliases": [ 00:09:37.001 "baf00530-560e-4f99-9e69-fd81e8e9f26f" 00:09:37.001 ], 00:09:37.001 "product_name": "Malloc disk", 00:09:37.001 "block_size": 512, 00:09:37.001 "num_blocks": 16384, 00:09:37.001 "uuid": "baf00530-560e-4f99-9e69-fd81e8e9f26f", 00:09:37.001 "assigned_rate_limits": { 00:09:37.001 "rw_ios_per_sec": 0, 00:09:37.001 "rw_mbytes_per_sec": 0, 00:09:37.001 "r_mbytes_per_sec": 0, 00:09:37.001 "w_mbytes_per_sec": 0 00:09:37.001 }, 
00:09:37.001 "claimed": false, 00:09:37.001 "zoned": false, 00:09:37.001 "supported_io_types": { 00:09:37.001 "read": true, 00:09:37.001 "write": true, 00:09:37.001 "unmap": true, 00:09:37.001 "flush": true, 00:09:37.001 "reset": true, 00:09:37.001 "nvme_admin": false, 00:09:37.001 "nvme_io": false, 00:09:37.001 "nvme_io_md": false, 00:09:37.001 "write_zeroes": true, 00:09:37.001 "zcopy": true, 00:09:37.001 "get_zone_info": false, 00:09:37.001 "zone_management": false, 00:09:37.001 "zone_append": false, 00:09:37.001 "compare": false, 00:09:37.001 "compare_and_write": false, 00:09:37.001 "abort": true, 00:09:37.001 "seek_hole": false, 00:09:37.001 "seek_data": false, 00:09:37.001 "copy": true, 00:09:37.001 "nvme_iov_md": false 00:09:37.001 }, 00:09:37.001 "memory_domains": [ 00:09:37.001 { 00:09:37.001 "dma_device_id": "system", 00:09:37.001 "dma_device_type": 1 00:09:37.001 }, 00:09:37.001 { 00:09:37.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.001 "dma_device_type": 2 00:09:37.001 } 00:09:37.001 ], 00:09:37.001 "driver_specific": {} 00:09:37.001 } 00:09:37.001 ]' 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.001 [2024-11-26 18:05:24.867614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:37.001 [2024-11-26 18:05:24.867653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.001 [2024-11-26 18:05:24.867690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eded20 00:09:37.001 [2024-11-26 18:05:24.867703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.001 [2024-11-26 18:05:24.869034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.001 [2024-11-26 18:05:24.869055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:37.001 Passthru0 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.001 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:37.001 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.002 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:37.002 { 00:09:37.002 "name": "Malloc0", 00:09:37.002 "aliases": [ 00:09:37.002 "baf00530-560e-4f99-9e69-fd81e8e9f26f" 00:09:37.002 ], 00:09:37.002 "product_name": "Malloc disk", 00:09:37.002 "block_size": 512, 00:09:37.002 "num_blocks": 16384, 00:09:37.002 "uuid": "baf00530-560e-4f99-9e69-fd81e8e9f26f", 00:09:37.002 "assigned_rate_limits": { 00:09:37.002 "rw_ios_per_sec": 0, 00:09:37.002 "rw_mbytes_per_sec": 0, 00:09:37.002 "r_mbytes_per_sec": 0, 00:09:37.002 "w_mbytes_per_sec": 0 00:09:37.002 }, 00:09:37.002 "claimed": true, 00:09:37.002 "claim_type": "exclusive_write", 00:09:37.002 "zoned": false, 00:09:37.002 "supported_io_types": { 00:09:37.002 "read": true, 00:09:37.002 "write": true, 00:09:37.002 "unmap": true, 00:09:37.002 "flush": 
true, 00:09:37.002 "reset": true, 00:09:37.002 "nvme_admin": false, 00:09:37.002 "nvme_io": false, 00:09:37.002 "nvme_io_md": false, 00:09:37.002 "write_zeroes": true, 00:09:37.002 "zcopy": true, 00:09:37.002 "get_zone_info": false, 00:09:37.002 "zone_management": false, 00:09:37.002 "zone_append": false, 00:09:37.002 "compare": false, 00:09:37.002 "compare_and_write": false, 00:09:37.002 "abort": true, 00:09:37.002 "seek_hole": false, 00:09:37.002 "seek_data": false, 00:09:37.002 "copy": true, 00:09:37.002 "nvme_iov_md": false 00:09:37.002 }, 00:09:37.002 "memory_domains": [ 00:09:37.002 { 00:09:37.002 "dma_device_id": "system", 00:09:37.002 "dma_device_type": 1 00:09:37.002 }, 00:09:37.002 { 00:09:37.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.002 "dma_device_type": 2 00:09:37.002 } 00:09:37.002 ], 00:09:37.002 "driver_specific": {} 00:09:37.002 }, 00:09:37.002 { 00:09:37.002 "name": "Passthru0", 00:09:37.002 "aliases": [ 00:09:37.002 "652653ca-e994-54e2-b30c-8386d8b36615" 00:09:37.002 ], 00:09:37.002 "product_name": "passthru", 00:09:37.002 "block_size": 512, 00:09:37.002 "num_blocks": 16384, 00:09:37.002 "uuid": "652653ca-e994-54e2-b30c-8386d8b36615", 00:09:37.002 "assigned_rate_limits": { 00:09:37.002 "rw_ios_per_sec": 0, 00:09:37.002 "rw_mbytes_per_sec": 0, 00:09:37.002 "r_mbytes_per_sec": 0, 00:09:37.002 "w_mbytes_per_sec": 0 00:09:37.002 }, 00:09:37.002 "claimed": false, 00:09:37.002 "zoned": false, 00:09:37.002 "supported_io_types": { 00:09:37.002 "read": true, 00:09:37.002 "write": true, 00:09:37.002 "unmap": true, 00:09:37.002 "flush": true, 00:09:37.002 "reset": true, 00:09:37.002 "nvme_admin": false, 00:09:37.002 "nvme_io": false, 00:09:37.002 "nvme_io_md": false, 00:09:37.002 "write_zeroes": true, 00:09:37.002 "zcopy": true, 00:09:37.002 "get_zone_info": false, 00:09:37.002 "zone_management": false, 00:09:37.002 "zone_append": false, 00:09:37.002 "compare": false, 00:09:37.002 "compare_and_write": false, 00:09:37.002 "abort": true, 00:09:37.002 "seek_hole": false, 00:09:37.002 "seek_data": false, 00:09:37.002 "copy": true, 00:09:37.002 "nvme_iov_md": false 00:09:37.002 }, 00:09:37.002 "memory_domains": [ 00:09:37.002 { 00:09:37.002 "dma_device_id": "system", 00:09:37.002 "dma_device_type": 1 00:09:37.002 }, 00:09:37.002 { 00:09:37.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.002 "dma_device_type": 2 00:09:37.002 } 00:09:37.002 ], 00:09:37.002 "driver_specific": { 00:09:37.002 "passthru": { 00:09:37.002 "name": "Passthru0", 00:09:37.002 "base_bdev_name": "Malloc0" 00:09:37.002 } 00:09:37.002 } 00:09:37.002 } 00:09:37.002 ]' 00:09:37.002 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:37.002 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:37.002 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.002 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.002 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.002 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:37.002 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:37.002 18:05:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:37.002 00:09:37.002 real 0m0.211s 00:09:37.002 user 0m0.132s 00:09:37.002 sys 0m0.024s 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.002 18:05:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.002 ************************************ 00:09:37.002 END TEST rpc_integrity 00:09:37.002 ************************************ 00:09:37.002 18:05:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:37.002 18:05:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.002 18:05:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.002 18:05:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 ************************************ 00:09:37.260 START TEST rpc_plugins 00:09:37.260 ************************************ 00:09:37.260 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:37.260 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:37.260 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.260 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.260 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:37.260 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:37.260 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.260 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.260 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.260 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:37.260 { 00:09:37.260 "name": "Malloc1", 00:09:37.260 "aliases": [ 00:09:37.260 "97e9c7f0-5e03-44e6-8a2a-b347bf189fdf" 00:09:37.260 ], 00:09:37.260 "product_name": "Malloc disk", 00:09:37.260 "block_size": 4096, 00:09:37.260 "num_blocks": 256, 00:09:37.260 "uuid": "97e9c7f0-5e03-44e6-8a2a-b347bf189fdf", 00:09:37.260 "assigned_rate_limits": { 00:09:37.260 "rw_ios_per_sec": 0, 00:09:37.260 "rw_mbytes_per_sec": 0, 00:09:37.260 "r_mbytes_per_sec": 0, 00:09:37.260 "w_mbytes_per_sec": 0 00:09:37.260 }, 00:09:37.260 "claimed": false, 00:09:37.260 "zoned": false, 00:09:37.260 "supported_io_types": { 00:09:37.260 "read": true, 00:09:37.260 "write": true, 00:09:37.260 "unmap": true, 00:09:37.260 "flush": true, 00:09:37.260 "reset": true, 00:09:37.260 "nvme_admin": false, 00:09:37.260 "nvme_io": false, 00:09:37.260 "nvme_io_md": false, 00:09:37.260 "write_zeroes": true, 00:09:37.260 "zcopy": true, 00:09:37.261 "get_zone_info": false, 00:09:37.261 "zone_management": false, 00:09:37.261 "zone_append": false, 00:09:37.261 "compare": false, 00:09:37.261 "compare_and_write": false, 00:09:37.261 "abort": true, 00:09:37.261 "seek_hole": false, 00:09:37.261 "seek_data": false, 00:09:37.261 "copy": true, 00:09:37.261 "nvme_iov_md": false 
00:09:37.261 }, 00:09:37.261 "memory_domains": [ 00:09:37.261 { 00:09:37.261 "dma_device_id": "system", 00:09:37.261 "dma_device_type": 1 00:09:37.261 }, 00:09:37.261 { 00:09:37.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.261 "dma_device_type": 2 00:09:37.261 } 00:09:37.261 ], 00:09:37.261 "driver_specific": {} 00:09:37.261 } 00:09:37.261 ]' 00:09:37.261 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:37.261 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:37.261 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:37.261 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.261 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.261 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.261 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:37.261 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.261 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.261 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.261 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:37.261 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:37.261 18:05:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:37.261 00:09:37.261 real 0m0.111s 00:09:37.261 user 0m0.073s 00:09:37.261 sys 0m0.010s 00:09:37.261 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.261 18:05:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.261 ************************************ 00:09:37.261 END TEST rpc_plugins 00:09:37.261 ************************************ 00:09:37.261 18:05:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:37.261 18:05:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.261 18:05:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.261 18:05:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.261 ************************************ 00:09:37.261 START TEST rpc_trace_cmd_test 00:09:37.261 ************************************ 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:37.261 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid482671", 00:09:37.261 "tpoint_group_mask": "0x8", 00:09:37.261 "iscsi_conn": { 00:09:37.261 "mask": "0x2", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "scsi": { 00:09:37.261 "mask": "0x4", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "bdev": { 00:09:37.261 "mask": "0x8", 00:09:37.261 "tpoint_mask": "0xffffffffffffffff" 00:09:37.261 }, 00:09:37.261 "nvmf_rdma": { 00:09:37.261 "mask": "0x10", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "nvmf_tcp": { 00:09:37.261 "mask": "0x20", 00:09:37.261 
"tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "ftl": { 00:09:37.261 "mask": "0x40", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "blobfs": { 00:09:37.261 "mask": "0x80", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "dsa": { 00:09:37.261 "mask": "0x200", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "thread": { 00:09:37.261 "mask": "0x400", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "nvme_pcie": { 00:09:37.261 "mask": "0x800", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "iaa": { 00:09:37.261 "mask": "0x1000", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "nvme_tcp": { 00:09:37.261 "mask": "0x2000", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "bdev_nvme": { 00:09:37.261 "mask": "0x4000", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "sock": { 00:09:37.261 "mask": "0x8000", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "blob": { 00:09:37.261 "mask": "0x10000", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "bdev_raid": { 00:09:37.261 "mask": "0x20000", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 }, 00:09:37.261 "scheduler": { 00:09:37.261 "mask": "0x40000", 00:09:37.261 "tpoint_mask": "0x0" 00:09:37.261 } 00:09:37.261 }' 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:37.261 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:37.522 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:37.522 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:37.522 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:37.522 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:37.522 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:37.522 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:37.522 18:05:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:37.522 00:09:37.522 real 0m0.201s 00:09:37.522 user 0m0.174s 00:09:37.522 sys 0m0.018s 00:09:37.522 18:05:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.522 18:05:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.522 ************************************ 00:09:37.522 END TEST rpc_trace_cmd_test 00:09:37.522 ************************************ 00:09:37.522 18:05:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:37.522 18:05:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:37.522 18:05:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:37.522 18:05:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.522 18:05:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.522 18:05:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.522 ************************************ 00:09:37.522 START TEST rpc_daemon_integrity 00:09:37.522 ************************************ 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.522 18:05:25 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:37.522 { 00:09:37.522 "name": "Malloc2", 00:09:37.522 "aliases": [ 00:09:37.522 "0e5171a5-5528-4546-bb45-93ba0964aec4" 00:09:37.522 ], 00:09:37.522 "product_name": "Malloc disk", 00:09:37.522 "block_size": 512, 00:09:37.522 "num_blocks": 16384, 00:09:37.522 "uuid": "0e5171a5-5528-4546-bb45-93ba0964aec4", 00:09:37.522 "assigned_rate_limits": { 00:09:37.522 "rw_ios_per_sec": 0, 00:09:37.522 "rw_mbytes_per_sec": 0, 00:09:37.522 "r_mbytes_per_sec": 0, 00:09:37.522 "w_mbytes_per_sec": 0 00:09:37.522 }, 00:09:37.522 "claimed": false, 00:09:37.522 "zoned": false, 00:09:37.522 "supported_io_types": { 00:09:37.522 "read": true, 00:09:37.522 "write": true, 00:09:37.522 "unmap": true, 00:09:37.522 "flush": true, 00:09:37.522 "reset": true, 00:09:37.522 "nvme_admin": false, 00:09:37.522 "nvme_io": false, 00:09:37.522 "nvme_io_md": false, 00:09:37.522 "write_zeroes": true, 00:09:37.522 "zcopy": true, 00:09:37.522 "get_zone_info": false, 00:09:37.522 "zone_management": false, 00:09:37.522 "zone_append": false, 00:09:37.522 "compare": false, 00:09:37.522 "compare_and_write": false, 00:09:37.522 "abort": true, 00:09:37.522 "seek_hole": false, 00:09:37.522 "seek_data": false, 00:09:37.522 "copy": true, 00:09:37.522 "nvme_iov_md": false 00:09:37.522 }, 00:09:37.522 "memory_domains": [ 00:09:37.522 { 00:09:37.522 "dma_device_id": "system", 00:09:37.522 "dma_device_type": 1 00:09:37.522 }, 00:09:37.522 { 00:09:37.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.522 "dma_device_type": 2 00:09:37.522 } 00:09:37.522 ], 00:09:37.522 "driver_specific": {} 00:09:37.522 } 00:09:37.522 ]' 00:09:37.522 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.783 [2024-11-26 18:05:25.538404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:37.783 
[2024-11-26 18:05:25.538447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.783 [2024-11-26 18:05:25.538476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d9afc0 00:09:37.783 [2024-11-26 18:05:25.538491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.783 [2024-11-26 18:05:25.539710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.783 [2024-11-26 18:05:25.539738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:37.783 Passthru0 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:37.783 { 00:09:37.783 "name": "Malloc2", 00:09:37.783 "aliases": [ 00:09:37.783 "0e5171a5-5528-4546-bb45-93ba0964aec4" 00:09:37.783 ], 00:09:37.783 "product_name": "Malloc disk", 00:09:37.783 "block_size": 512, 00:09:37.783 "num_blocks": 16384, 00:09:37.783 "uuid": "0e5171a5-5528-4546-bb45-93ba0964aec4", 00:09:37.783 "assigned_rate_limits": { 00:09:37.783 "rw_ios_per_sec": 0, 00:09:37.783 "rw_mbytes_per_sec": 0, 00:09:37.783 "r_mbytes_per_sec": 0, 00:09:37.783 "w_mbytes_per_sec": 0 00:09:37.783 }, 00:09:37.783 "claimed": true, 00:09:37.783 "claim_type": "exclusive_write", 00:09:37.783 "zoned": false, 00:09:37.783 "supported_io_types": { 00:09:37.783 "read": true, 00:09:37.783 "write": true, 00:09:37.783 "unmap": true, 00:09:37.783 "flush": true, 00:09:37.783 "reset": true, 00:09:37.783 "nvme_admin": false, 00:09:37.783 "nvme_io": false, 00:09:37.783 "nvme_io_md": false, 00:09:37.783 "write_zeroes": true, 00:09:37.783 "zcopy": true, 00:09:37.783 "get_zone_info": false, 00:09:37.783 "zone_management": false, 00:09:37.783 "zone_append": false, 00:09:37.783 "compare": false, 00:09:37.783 "compare_and_write": false, 00:09:37.783 "abort": true, 00:09:37.783 "seek_hole": false, 00:09:37.783 "seek_data": false, 00:09:37.783 "copy": true, 00:09:37.783 "nvme_iov_md": false 00:09:37.783 }, 00:09:37.783 "memory_domains": [ 00:09:37.783 { 00:09:37.783 "dma_device_id": "system", 00:09:37.783 "dma_device_type": 1 00:09:37.783 }, 00:09:37.783 { 00:09:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.783 "dma_device_type": 2 00:09:37.783 } 00:09:37.783 ], 00:09:37.783 "driver_specific": {} 00:09:37.783 }, 00:09:37.783 { 00:09:37.783 "name": "Passthru0", 00:09:37.783 "aliases": [ 00:09:37.783 "8c4514b6-73b8-5c0e-8aef-f291d3f7a9e9" 00:09:37.783 ], 00:09:37.783 "product_name": "passthru", 00:09:37.783 "block_size": 512, 00:09:37.783 "num_blocks": 16384, 00:09:37.783 "uuid": "8c4514b6-73b8-5c0e-8aef-f291d3f7a9e9", 00:09:37.783 "assigned_rate_limits": { 00:09:37.783 "rw_ios_per_sec": 0, 00:09:37.783 "rw_mbytes_per_sec": 0, 00:09:37.783 "r_mbytes_per_sec": 0, 00:09:37.783 "w_mbytes_per_sec": 0 00:09:37.783 }, 00:09:37.783 "claimed": false, 00:09:37.783 "zoned": false, 00:09:37.783 "supported_io_types": { 00:09:37.783 "read": true, 00:09:37.783 "write": true, 00:09:37.783 "unmap": true, 00:09:37.783 "flush": true, 00:09:37.783 "reset": true, 
00:09:37.783 "nvme_admin": false, 00:09:37.783 "nvme_io": false, 00:09:37.783 "nvme_io_md": false, 00:09:37.783 "write_zeroes": true, 00:09:37.783 "zcopy": true, 00:09:37.783 "get_zone_info": false, 00:09:37.783 "zone_management": false, 00:09:37.783 "zone_append": false, 00:09:37.783 "compare": false, 00:09:37.783 "compare_and_write": false, 00:09:37.783 "abort": true, 00:09:37.783 "seek_hole": false, 00:09:37.783 "seek_data": false, 00:09:37.783 "copy": true, 00:09:37.783 "nvme_iov_md": false 00:09:37.783 }, 00:09:37.783 "memory_domains": [ 00:09:37.783 { 00:09:37.783 "dma_device_id": "system", 00:09:37.783 "dma_device_type": 1 00:09:37.783 }, 00:09:37.783 { 00:09:37.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.783 "dma_device_type": 2 00:09:37.783 } 00:09:37.783 ], 00:09:37.783 "driver_specific": { 00:09:37.783 "passthru": { 00:09:37.783 "name": "Passthru0", 00:09:37.783 "base_bdev_name": "Malloc2" 00:09:37.783 } 00:09:37.783 } 00:09:37.783 } 00:09:37.783 ]' 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:37.783 00:09:37.783 real 0m0.212s 00:09:37.783 user 0m0.139s 00:09:37.783 sys 0m0.017s 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.783 18:05:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.783 ************************************ 00:09:37.783 END TEST rpc_daemon_integrity 00:09:37.783 ************************************ 00:09:37.783 18:05:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:37.783 18:05:25 rpc -- rpc/rpc.sh@84 -- # killprocess 482671 00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@954 -- # '[' -z 482671 ']' 00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@958 -- # kill -0 482671 00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@959 -- # uname 00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482671 
00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482671' 00:09:37.783 killing process with pid 482671 00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@973 -- # kill 482671 00:09:37.783 18:05:25 rpc -- common/autotest_common.sh@978 -- # wait 482671 00:09:38.353 00:09:38.353 real 0m1.996s 00:09:38.353 user 0m2.464s 00:09:38.353 sys 0m0.624s 00:09:38.353 18:05:26 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.353 18:05:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.353 ************************************ 00:09:38.353 END TEST rpc 00:09:38.353 ************************************ 00:09:38.353 18:05:26 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:38.353 18:05:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.353 18:05:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.353 18:05:26 -- common/autotest_common.sh@10 -- # set +x 00:09:38.353 ************************************ 00:09:38.353 START TEST skip_rpc 00:09:38.353 ************************************ 00:09:38.353 18:05:26 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:38.353 * Looking for test storage... 00:09:38.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:38.353 18:05:26 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.353 18:05:26 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.353 18:05:26 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.353 18:05:26 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.353 18:05:26 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:38.353 18:05:26 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.353 18:05:26 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.353 --rc genhtml_branch_coverage=1 00:09:38.353 --rc genhtml_function_coverage=1 00:09:38.353 --rc genhtml_legend=1 00:09:38.353 --rc geninfo_all_blocks=1 00:09:38.353 --rc geninfo_unexecuted_blocks=1 00:09:38.353 00:09:38.353 ' 00:09:38.353 18:05:26 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.353 --rc genhtml_branch_coverage=1 00:09:38.353 --rc genhtml_function_coverage=1 00:09:38.353 --rc genhtml_legend=1 00:09:38.353 --rc geninfo_all_blocks=1 00:09:38.354 --rc geninfo_unexecuted_blocks=1 00:09:38.354 00:09:38.354 ' 00:09:38.354 18:05:26 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.354 --rc genhtml_branch_coverage=1 00:09:38.354 --rc genhtml_function_coverage=1 00:09:38.354 --rc genhtml_legend=1 00:09:38.354 --rc geninfo_all_blocks=1 00:09:38.354 --rc geninfo_unexecuted_blocks=1 00:09:38.354 00:09:38.354 ' 00:09:38.354 18:05:26 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.354 --rc genhtml_branch_coverage=1 00:09:38.354 --rc genhtml_function_coverage=1 00:09:38.354 --rc genhtml_legend=1 00:09:38.354 --rc geninfo_all_blocks=1 00:09:38.354 --rc geninfo_unexecuted_blocks=1 00:09:38.354 00:09:38.354 ' 00:09:38.354 18:05:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:38.354 18:05:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:38.354 18:05:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:38.354 18:05:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.354 18:05:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.354 18:05:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.614 ************************************ 00:09:38.614 START TEST skip_rpc 00:09:38.614 ************************************ 00:09:38.614 18:05:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:38.614 
18:05:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=483018 00:09:38.614 18:05:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:38.614 18:05:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:38.614 18:05:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:38.614 [2024-11-26 18:05:26.425245] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:09:38.614 [2024-11-26 18:05:26.425340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483018 ] 00:09:38.614 [2024-11-26 18:05:26.494298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.614 [2024-11-26 18:05:26.553163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 483018 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 483018 ']' 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 483018 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483018 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483018' 00:09:43.895 killing process with pid 483018 00:09:43.895 18:05:31 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 483018 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 483018 00:09:43.895 00:09:43.895 real 0m5.457s 00:09:43.895 user 0m5.143s 00:09:43.895 sys 0m0.323s 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.895 18:05:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 ************************************ 00:09:43.895 END TEST skip_rpc 00:09:43.895 ************************************ 00:09:43.895 18:05:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:43.895 18:05:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:43.895 18:05:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.895 18:05:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.895 ************************************ 00:09:43.895 START TEST skip_rpc_with_json 00:09:43.895 ************************************ 00:09:43.895 18:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:43.895 18:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:43.895 18:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=483704 00:09:43.895 18:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:43.895 18:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:43.895 18:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 483704 00:09:43.895 18:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 483704 ']' 00:09:43.896 18:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.896 18:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.896 18:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.896 18:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.896 18:05:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:44.155 [2024-11-26 18:05:31.934365] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:09:44.155 [2024-11-26 18:05:31.934462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483704 ] 00:09:44.155 [2024-11-26 18:05:31.999730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.155 [2024-11-26 18:05:32.063749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:44.415 [2024-11-26 18:05:32.346621] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:44.415 request: 00:09:44.415 { 00:09:44.415 "trtype": "tcp", 00:09:44.415 "method": "nvmf_get_transports", 00:09:44.415 "req_id": 1 00:09:44.415 } 00:09:44.415 Got JSON-RPC error response 00:09:44.415 response: 00:09:44.415 { 00:09:44.415 "code": -19, 00:09:44.415 "message": "No such device" 00:09:44.415 } 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:44.415 [2024-11-26 18:05:32.354728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.415 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:44.676 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.676 18:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:44.676 { 00:09:44.676 "subsystems": [ 00:09:44.676 { 00:09:44.676 "subsystem": "fsdev", 00:09:44.676 "config": [ 00:09:44.676 { 00:09:44.676 "method": "fsdev_set_opts", 00:09:44.676 "params": { 00:09:44.676 "fsdev_io_pool_size": 65535, 00:09:44.676 "fsdev_io_cache_size": 256 00:09:44.676 } 00:09:44.676 } 00:09:44.676 ] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "vfio_user_target", 00:09:44.676 "config": null 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "keyring", 00:09:44.676 "config": [] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "iobuf", 00:09:44.676 "config": [ 00:09:44.676 { 00:09:44.676 "method": "iobuf_set_options", 00:09:44.676 "params": { 00:09:44.676 "small_pool_count": 8192, 00:09:44.676 "large_pool_count": 1024, 00:09:44.676 "small_bufsize": 8192, 00:09:44.676 "large_bufsize": 135168, 00:09:44.676 "enable_numa": false 00:09:44.676 } 00:09:44.676 } 00:09:44.676 
] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "sock", 00:09:44.676 "config": [ 00:09:44.676 { 00:09:44.676 "method": "sock_set_default_impl", 00:09:44.676 "params": { 00:09:44.676 "impl_name": "posix" 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "sock_impl_set_options", 00:09:44.676 "params": { 00:09:44.676 "impl_name": "ssl", 00:09:44.676 "recv_buf_size": 4096, 00:09:44.676 "send_buf_size": 4096, 00:09:44.676 "enable_recv_pipe": true, 00:09:44.676 "enable_quickack": false, 00:09:44.676 "enable_placement_id": 0, 00:09:44.676 "enable_zerocopy_send_server": true, 00:09:44.676 "enable_zerocopy_send_client": false, 00:09:44.676 "zerocopy_threshold": 0, 00:09:44.676 "tls_version": 0, 00:09:44.676 "enable_ktls": false 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "sock_impl_set_options", 00:09:44.676 "params": { 00:09:44.676 "impl_name": "posix", 00:09:44.676 "recv_buf_size": 2097152, 00:09:44.676 "send_buf_size": 2097152, 00:09:44.676 "enable_recv_pipe": true, 00:09:44.676 "enable_quickack": false, 00:09:44.676 "enable_placement_id": 0, 00:09:44.676 "enable_zerocopy_send_server": true, 00:09:44.676 "enable_zerocopy_send_client": false, 00:09:44.676 "zerocopy_threshold": 0, 00:09:44.676 "tls_version": 0, 00:09:44.676 "enable_ktls": false 00:09:44.676 } 00:09:44.676 } 00:09:44.676 ] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "vmd", 00:09:44.676 "config": [] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "accel", 00:09:44.676 "config": [ 00:09:44.676 { 00:09:44.676 "method": "accel_set_options", 00:09:44.676 "params": { 00:09:44.676 "small_cache_size": 128, 00:09:44.676 "large_cache_size": 16, 00:09:44.676 "task_count": 2048, 00:09:44.676 "sequence_count": 2048, 00:09:44.676 "buf_count": 2048 00:09:44.676 } 00:09:44.676 } 00:09:44.676 ] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "bdev", 00:09:44.676 "config": [ 00:09:44.676 { 00:09:44.676 "method": "bdev_set_options", 00:09:44.676 "params": { 00:09:44.676 "bdev_io_pool_size": 65535, 00:09:44.676 "bdev_io_cache_size": 256, 00:09:44.676 "bdev_auto_examine": true, 00:09:44.676 "iobuf_small_cache_size": 128, 00:09:44.676 "iobuf_large_cache_size": 16 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "bdev_raid_set_options", 00:09:44.676 "params": { 00:09:44.676 "process_window_size_kb": 1024, 00:09:44.676 "process_max_bandwidth_mb_sec": 0 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "bdev_iscsi_set_options", 00:09:44.676 "params": { 00:09:44.676 "timeout_sec": 30 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "bdev_nvme_set_options", 00:09:44.676 "params": { 00:09:44.676 "action_on_timeout": "none", 00:09:44.676 "timeout_us": 0, 00:09:44.676 "timeout_admin_us": 0, 00:09:44.676 "keep_alive_timeout_ms": 10000, 00:09:44.676 "arbitration_burst": 0, 00:09:44.676 "low_priority_weight": 0, 00:09:44.676 "medium_priority_weight": 0, 00:09:44.676 "high_priority_weight": 0, 00:09:44.676 "nvme_adminq_poll_period_us": 10000, 00:09:44.676 "nvme_ioq_poll_period_us": 0, 00:09:44.676 "io_queue_requests": 0, 00:09:44.676 "delay_cmd_submit": true, 00:09:44.676 "transport_retry_count": 4, 00:09:44.676 "bdev_retry_count": 3, 00:09:44.676 "transport_ack_timeout": 0, 00:09:44.676 "ctrlr_loss_timeout_sec": 0, 00:09:44.676 "reconnect_delay_sec": 0, 00:09:44.676 "fast_io_fail_timeout_sec": 0, 00:09:44.676 "disable_auto_failback": false, 00:09:44.676 "generate_uuids": false, 00:09:44.676 "transport_tos": 0, 
00:09:44.676 "nvme_error_stat": false, 00:09:44.676 "rdma_srq_size": 0, 00:09:44.676 "io_path_stat": false, 00:09:44.676 "allow_accel_sequence": false, 00:09:44.676 "rdma_max_cq_size": 0, 00:09:44.676 "rdma_cm_event_timeout_ms": 0, 00:09:44.676 "dhchap_digests": [ 00:09:44.676 "sha256", 00:09:44.676 "sha384", 00:09:44.676 "sha512" 00:09:44.676 ], 00:09:44.676 "dhchap_dhgroups": [ 00:09:44.676 "null", 00:09:44.676 "ffdhe2048", 00:09:44.676 "ffdhe3072", 00:09:44.676 "ffdhe4096", 00:09:44.676 "ffdhe6144", 00:09:44.676 "ffdhe8192" 00:09:44.676 ] 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "bdev_nvme_set_hotplug", 00:09:44.676 "params": { 00:09:44.676 "period_us": 100000, 00:09:44.676 "enable": false 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "bdev_wait_for_examine" 00:09:44.676 } 00:09:44.676 ] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "scsi", 00:09:44.676 "config": null 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "scheduler", 00:09:44.676 "config": [ 00:09:44.676 { 00:09:44.676 "method": "framework_set_scheduler", 00:09:44.676 "params": { 00:09:44.676 "name": "static" 00:09:44.676 } 00:09:44.676 } 00:09:44.676 ] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "vhost_scsi", 00:09:44.676 "config": [] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "vhost_blk", 00:09:44.676 "config": [] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "ublk", 00:09:44.676 "config": [] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "nbd", 00:09:44.676 "config": [] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "nvmf", 00:09:44.676 "config": [ 00:09:44.676 { 00:09:44.676 "method": "nvmf_set_config", 00:09:44.676 "params": { 00:09:44.676 "discovery_filter": "match_any", 00:09:44.676 "admin_cmd_passthru": { 00:09:44.676 "identify_ctrlr": false 00:09:44.676 }, 00:09:44.676 "dhchap_digests": [ 00:09:44.676 "sha256", 00:09:44.676 "sha384", 00:09:44.676 "sha512" 00:09:44.676 ], 00:09:44.676 "dhchap_dhgroups": [ 00:09:44.676 "null", 00:09:44.676 "ffdhe2048", 00:09:44.676 "ffdhe3072", 00:09:44.676 "ffdhe4096", 00:09:44.676 "ffdhe6144", 00:09:44.676 "ffdhe8192" 00:09:44.676 ] 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "nvmf_set_max_subsystems", 00:09:44.676 "params": { 00:09:44.676 "max_subsystems": 1024 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "nvmf_set_crdt", 00:09:44.676 "params": { 00:09:44.676 "crdt1": 0, 00:09:44.676 "crdt2": 0, 00:09:44.676 "crdt3": 0 00:09:44.676 } 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "method": "nvmf_create_transport", 00:09:44.676 "params": { 00:09:44.676 "trtype": "TCP", 00:09:44.676 "max_queue_depth": 128, 00:09:44.676 "max_io_qpairs_per_ctrlr": 127, 00:09:44.676 "in_capsule_data_size": 4096, 00:09:44.676 "max_io_size": 131072, 00:09:44.676 "io_unit_size": 131072, 00:09:44.676 "max_aq_depth": 128, 00:09:44.676 "num_shared_buffers": 511, 00:09:44.676 "buf_cache_size": 4294967295, 00:09:44.676 "dif_insert_or_strip": false, 00:09:44.676 "zcopy": false, 00:09:44.676 "c2h_success": true, 00:09:44.676 "sock_priority": 0, 00:09:44.676 "abort_timeout_sec": 1, 00:09:44.676 "ack_timeout": 0, 00:09:44.676 "data_wr_pool_size": 0 00:09:44.676 } 00:09:44.676 } 00:09:44.676 ] 00:09:44.676 }, 00:09:44.676 { 00:09:44.676 "subsystem": "iscsi", 00:09:44.676 "config": [ 00:09:44.676 { 00:09:44.676 "method": "iscsi_set_options", 00:09:44.676 "params": { 00:09:44.676 "node_base": "iqn.2016-06.io.spdk", 00:09:44.676 "max_sessions": 
128, 00:09:44.676 "max_connections_per_session": 2, 00:09:44.676 "max_queue_depth": 64, 00:09:44.677 "default_time2wait": 2, 00:09:44.677 "default_time2retain": 20, 00:09:44.677 "first_burst_length": 8192, 00:09:44.677 "immediate_data": true, 00:09:44.677 "allow_duplicated_isid": false, 00:09:44.677 "error_recovery_level": 0, 00:09:44.677 "nop_timeout": 60, 00:09:44.677 "nop_in_interval": 30, 00:09:44.677 "disable_chap": false, 00:09:44.677 "require_chap": false, 00:09:44.677 "mutual_chap": false, 00:09:44.677 "chap_group": 0, 00:09:44.677 "max_large_datain_per_connection": 64, 00:09:44.677 "max_r2t_per_connection": 4, 00:09:44.677 "pdu_pool_size": 36864, 00:09:44.677 "immediate_data_pool_size": 16384, 00:09:44.677 "data_out_pool_size": 2048 00:09:44.677 } 00:09:44.677 } 00:09:44.677 ] 00:09:44.677 } 00:09:44.677 ] 00:09:44.677 } 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 483704 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 483704 ']' 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 483704 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483704 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 483704' 00:09:44.677 killing process with pid 483704 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 483704 00:09:44.677 18:05:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 483704 00:09:45.245 18:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=483844 00:09:45.245 18:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:45.245 18:05:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:50.525 18:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 483844 00:09:50.525 18:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 483844 ']' 00:09:50.525 18:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 483844 00:09:50.525 18:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:50.525 18:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.525 18:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 483844 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 483844' 00:09:50.525 killing process with pid 483844 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 483844 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 483844 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:50.525 00:09:50.525 real 0m6.564s 00:09:50.525 user 0m6.218s 00:09:50.525 sys 0m0.670s 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:50.525 ************************************ 00:09:50.525 END TEST skip_rpc_with_json 00:09:50.525 ************************************ 00:09:50.525 18:05:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:50.525 18:05:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.525 18:05:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.525 18:05:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.525 ************************************ 00:09:50.525 START TEST skip_rpc_with_delay 00:09:50.525 ************************************ 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:50.525 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:50.803 [2024-11-26 
18:05:38.544876] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:50.803 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:50.803 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:50.803 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:50.803 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:50.803 00:09:50.803 real 0m0.074s 00:09:50.803 user 0m0.043s 00:09:50.803 sys 0m0.030s 00:09:50.803 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.804 18:05:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:50.804 ************************************ 00:09:50.804 END TEST skip_rpc_with_delay 00:09:50.804 ************************************ 00:09:50.804 18:05:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:50.804 18:05:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:50.804 18:05:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:50.804 18:05:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.804 18:05:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.804 18:05:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.804 ************************************ 00:09:50.804 START TEST exit_on_failed_rpc_init 00:09:50.804 ************************************ 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=484562 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 484562 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 484562 ']' 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.804 18:05:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:50.804 [2024-11-26 18:05:38.663507] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:09:50.804 [2024-11-26 18:05:38.663601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484562 ] 00:09:50.804 [2024-11-26 18:05:38.730191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.804 [2024-11-26 18:05:38.791547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:51.062 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:51.322 [2024-11-26 18:05:39.122979] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:09:51.322 [2024-11-26 18:05:39.123059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484687 ] 00:09:51.322 [2024-11-26 18:05:39.189536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.322 [2024-11-26 18:05:39.249156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.322 [2024-11-26 18:05:39.249271] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:51.322 [2024-11-26 18:05:39.249319] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:51.322 [2024-11-26 18:05:39.249349] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 484562 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 484562 ']' 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 484562 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.322 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 484562 00:09:51.582 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.582 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.582 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 484562' 00:09:51.582 killing process with pid 484562 00:09:51.582 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 484562 00:09:51.582 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 484562 00:09:51.840 00:09:51.840 real 0m1.188s 00:09:51.840 user 0m1.301s 00:09:51.840 sys 0m0.440s 00:09:51.840 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.840 18:05:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:51.840 ************************************ 00:09:51.840 END TEST exit_on_failed_rpc_init 00:09:51.840 ************************************ 00:09:51.840 18:05:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:51.840 00:09:51.840 real 0m13.632s 00:09:51.840 user 0m12.879s 00:09:51.840 sys 0m1.660s 00:09:51.840 18:05:39 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.840 18:05:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.840 ************************************ 00:09:51.840 END TEST skip_rpc 00:09:51.840 ************************************ 00:09:52.098 18:05:39 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:52.098 18:05:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.098 18:05:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.098 18:05:39 -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.098 ************************************ 00:09:52.098 START TEST rpc_client 00:09:52.098 ************************************ 00:09:52.098 18:05:39 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:52.098 * Looking for test storage... 00:09:52.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:09:52.098 18:05:39 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.098 18:05:39 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.098 18:05:39 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.098 18:05:40 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.098 18:05:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:52.099 18:05:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.099 18:05:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.099 18:05:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.099 18:05:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:52.099 18:05:40 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.099 18:05:40 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.099 --rc genhtml_branch_coverage=1 00:09:52.099 --rc genhtml_function_coverage=1 00:09:52.099 --rc genhtml_legend=1 00:09:52.099 --rc geninfo_all_blocks=1 00:09:52.099 --rc geninfo_unexecuted_blocks=1 00:09:52.099 00:09:52.099 ' 00:09:52.099 18:05:40 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.099 --rc genhtml_branch_coverage=1 00:09:52.099 --rc genhtml_function_coverage=1 00:09:52.099 --rc genhtml_legend=1 00:09:52.099 --rc geninfo_all_blocks=1 00:09:52.099 --rc geninfo_unexecuted_blocks=1 00:09:52.099 00:09:52.099 ' 00:09:52.099 18:05:40 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.099 --rc genhtml_branch_coverage=1 00:09:52.099 --rc genhtml_function_coverage=1 00:09:52.099 --rc genhtml_legend=1 00:09:52.099 --rc geninfo_all_blocks=1 00:09:52.099 --rc geninfo_unexecuted_blocks=1 00:09:52.099 00:09:52.099 ' 00:09:52.099 18:05:40 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.099 --rc genhtml_branch_coverage=1 00:09:52.099 --rc genhtml_function_coverage=1 00:09:52.099 --rc genhtml_legend=1 00:09:52.099 --rc geninfo_all_blocks=1 00:09:52.099 --rc geninfo_unexecuted_blocks=1 00:09:52.099 00:09:52.099 ' 00:09:52.099 18:05:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:09:52.099 OK 00:09:52.099 18:05:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:52.099 00:09:52.099 real 0m0.170s 00:09:52.099 user 0m0.109s 00:09:52.099 sys 0m0.070s 00:09:52.099 18:05:40 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.099 18:05:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:52.099 ************************************ 00:09:52.099 END TEST rpc_client 00:09:52.099 ************************************ 00:09:52.099 18:05:40 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:09:52.099 18:05:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.099 18:05:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.099 18:05:40 -- common/autotest_common.sh@10 -- # set +x 00:09:52.099 ************************************ 00:09:52.099 START TEST json_config 00:09:52.099 ************************************ 00:09:52.099 18:05:40 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:09:52.358 18:05:40 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:52.358 18:05:40 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:52.358 18:05:40 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.358 18:05:40 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.358 18:05:40 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.358 18:05:40 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.358 18:05:40 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.358 18:05:40 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.358 18:05:40 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.358 18:05:40 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.358 18:05:40 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.358 18:05:40 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.358 18:05:40 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.358 18:05:40 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.358 18:05:40 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.358 18:05:40 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:52.358 18:05:40 json_config -- scripts/common.sh@345 -- # : 1 00:09:52.358 18:05:40 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.358 18:05:40 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.358 18:05:40 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:52.358 18:05:40 json_config -- scripts/common.sh@353 -- # local d=1 00:09:52.358 18:05:40 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.358 18:05:40 json_config -- scripts/common.sh@355 -- # echo 1 00:09:52.358 18:05:40 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.358 18:05:40 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:52.358 18:05:40 json_config -- scripts/common.sh@353 -- # local d=2 00:09:52.358 18:05:40 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.358 18:05:40 json_config -- scripts/common.sh@355 -- # echo 2 00:09:52.358 18:05:40 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.358 18:05:40 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.358 18:05:40 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.358 18:05:40 json_config -- scripts/common.sh@368 -- # return 0 00:09:52.358 18:05:40 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.358 18:05:40 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.358 --rc genhtml_branch_coverage=1 00:09:52.358 --rc genhtml_function_coverage=1 00:09:52.358 --rc genhtml_legend=1 00:09:52.358 --rc geninfo_all_blocks=1 00:09:52.358 --rc geninfo_unexecuted_blocks=1 00:09:52.358 00:09:52.358 ' 00:09:52.358 18:05:40 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.358 --rc genhtml_branch_coverage=1 00:09:52.358 --rc genhtml_function_coverage=1 00:09:52.358 --rc genhtml_legend=1 00:09:52.358 --rc geninfo_all_blocks=1 00:09:52.358 --rc geninfo_unexecuted_blocks=1 00:09:52.358 00:09:52.358 ' 00:09:52.358 18:05:40 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.358 --rc genhtml_branch_coverage=1 00:09:52.358 --rc genhtml_function_coverage=1 00:09:52.358 --rc genhtml_legend=1 00:09:52.358 --rc geninfo_all_blocks=1 00:09:52.358 --rc geninfo_unexecuted_blocks=1 00:09:52.358 00:09:52.358 ' 00:09:52.358 18:05:40 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.358 --rc genhtml_branch_coverage=1 00:09:52.358 --rc genhtml_function_coverage=1 00:09:52.358 --rc genhtml_legend=1 00:09:52.358 --rc geninfo_all_blocks=1 00:09:52.358 --rc geninfo_unexecuted_blocks=1 00:09:52.358 00:09:52.358 ' 00:09:52.358 18:05:40 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:09:52.358 18:05:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.358 18:05:40 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.359 18:05:40 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.359 18:05:40 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.359 18:05:40 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.359 18:05:40 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.359 18:05:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.359 18:05:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.359 18:05:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.359 18:05:40 json_config -- paths/export.sh@5 -- # export PATH 00:09:52.359 18:05:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.359 18:05:40 json_config -- nvmf/common.sh@51 -- # : 0 00:09:52.359 18:05:40 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.359 18:05:40 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:09:52.359 18:05:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.359 18:05:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.359 18:05:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.359 18:05:40 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.359 18:05:40 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.359 18:05:40 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.359 18:05:40 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:52.359 INFO: JSON configuration test init 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:52.359 18:05:40 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:52.359 18:05:40 json_config -- 
json_config/common.sh@9 -- # local app=target 00:09:52.359 18:05:40 json_config -- json_config/common.sh@10 -- # shift 00:09:52.359 18:05:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:52.359 18:05:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:52.359 18:05:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:52.359 18:05:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:52.359 18:05:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:52.359 18:05:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=484953 00:09:52.359 18:05:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:52.359 18:05:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:52.359 Waiting for target to run... 00:09:52.359 18:05:40 json_config -- json_config/common.sh@25 -- # waitforlisten 484953 /var/tmp/spdk_tgt.sock 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@835 -- # '[' -z 484953 ']' 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:52.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.359 18:05:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:52.359 [2024-11-26 18:05:40.316159] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:09:52.359 [2024-11-26 18:05:40.316251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484953 ] 00:09:52.929 [2024-11-26 18:05:40.703039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.929 [2024-11-26 18:05:40.747681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.497 18:05:41 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.497 18:05:41 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:53.497 18:05:41 json_config -- json_config/common.sh@26 -- # echo '' 00:09:53.497 00:09:53.497 18:05:41 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:53.497 18:05:41 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:53.497 18:05:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.497 18:05:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.497 18:05:41 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:53.497 18:05:41 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:53.497 18:05:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:53.497 18:05:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.497 18:05:41 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:53.497 18:05:41 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:53.497 18:05:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:56.780 18:05:44 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:09:56.780 18:05:44 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:56.780 18:05:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.780 18:05:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:56.780 18:05:44 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:56.780 18:05:44 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:56.780 18:05:44 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:56.780 18:05:44 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:56.780 18:05:44 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:56.781 18:05:44 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:56.781 18:05:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:56.781 18:05:44 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:57.037 18:05:44 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@54 -- # sort 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:57.037 18:05:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.037 18:05:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:57.037 18:05:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.037 18:05:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:57.037 18:05:44 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:57.037 18:05:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:57.294 MallocForNvmf0 00:09:57.294 18:05:45 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:57.294 18:05:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:57.552 MallocForNvmf1 00:09:57.552 18:05:45 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:57.552 18:05:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:57.808 [2024-11-26 18:05:45.614612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.808 18:05:45 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.808 18:05:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:58.066 18:05:45 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:58.066 18:05:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:58.322 18:05:46 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:58.322 18:05:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:58.579 18:05:46 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:58.579 18:05:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:58.931 [2024-11-26 18:05:46.702077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:58.931 18:05:46 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:58.931 18:05:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.931 18:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:58.931 18:05:46 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:58.931 18:05:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.931 18:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:58.931 18:05:46 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:09:58.931 18:05:46 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:58.931 18:05:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:59.188 MallocBdevForConfigChangeCheck 00:09:59.188 18:05:47 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:59.188 18:05:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.188 18:05:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.188 18:05:47 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:59.188 18:05:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:59.444 18:05:47 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:09:59.444 INFO: shutting down applications... 
00:09:59.444 18:05:47 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:59.444 18:05:47 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:59.444 18:05:47 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:59.700 18:05:47 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:01.594 Calling clear_iscsi_subsystem 00:10:01.594 Calling clear_nvmf_subsystem 00:10:01.594 Calling clear_nbd_subsystem 00:10:01.594 Calling clear_ublk_subsystem 00:10:01.594 Calling clear_vhost_blk_subsystem 00:10:01.594 Calling clear_vhost_scsi_subsystem 00:10:01.594 Calling clear_bdev_subsystem 00:10:01.594 18:05:49 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:10:01.594 18:05:49 json_config -- json_config/json_config.sh@350 -- # count=100 00:10:01.594 18:05:49 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:10:01.594 18:05:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:01.594 18:05:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:01.594 18:05:49 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:10:01.594 18:05:49 json_config -- json_config/json_config.sh@352 -- # break 00:10:01.594 18:05:49 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:10:01.594 18:05:49 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:10:01.594 18:05:49 json_config -- json_config/common.sh@31 -- # local app=target 00:10:01.594 18:05:49 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:01.594 18:05:49 json_config -- json_config/common.sh@35 -- # [[ -n 484953 ]] 00:10:01.594 18:05:49 json_config -- json_config/common.sh@38 -- # kill -SIGINT 484953 00:10:01.594 18:05:49 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:01.594 18:05:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:01.594 18:05:49 json_config -- json_config/common.sh@41 -- # kill -0 484953 00:10:01.594 18:05:49 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:02.161 18:05:50 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:02.161 18:05:50 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:02.161 18:05:50 json_config -- json_config/common.sh@41 -- # kill -0 484953 00:10:02.161 18:05:50 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:02.161 18:05:50 json_config -- json_config/common.sh@43 -- # break 00:10:02.161 18:05:50 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:02.161 18:05:50 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:02.161 SPDK target shutdown done 00:10:02.161 18:05:50 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:10:02.161 INFO: relaunching applications... 
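The clear/shutdown sequence above (clear_config.py followed by json_config_test_shutdown_app) is a simple signal-and-poll pattern. A minimal sketch of that pattern as traced in json_config/common.sh; the pid value is the one from this run and purely illustrative:

    pid=484953                       # pid of the running spdk_tgt, as logged
    kill -SIGINT "$pid"              # ask the target to shut down cleanly
    for i in $(seq 1 30); do         # common.sh polls up to 30 times
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
        sleep 0.5
    done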
00:10:02.161 18:05:50 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:02.161 18:05:50 json_config -- json_config/common.sh@9 -- # local app=target 00:10:02.161 18:05:50 json_config -- json_config/common.sh@10 -- # shift 00:10:02.161 18:05:50 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:02.161 18:05:50 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:02.161 18:05:50 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:02.161 18:05:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:02.161 18:05:50 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:02.161 18:05:50 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=486163 00:10:02.161 18:05:50 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:02.161 18:05:50 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:02.161 Waiting for target to run... 00:10:02.161 18:05:50 json_config -- json_config/common.sh@25 -- # waitforlisten 486163 /var/tmp/spdk_tgt.sock 00:10:02.161 18:05:50 json_config -- common/autotest_common.sh@835 -- # '[' -z 486163 ']' 00:10:02.161 18:05:50 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:02.161 18:05:50 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.161 18:05:50 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:02.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:02.161 18:05:50 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.161 18:05:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:02.161 [2024-11-26 18:05:50.091964] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:02.161 [2024-11-26 18:05:50.092055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486163 ] 00:10:02.801 [2024-11-26 18:05:50.465157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.801 [2024-11-26 18:05:50.509207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.084 [2024-11-26 18:05:53.556327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.084 [2024-11-26 18:05:53.588780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:06.084 18:05:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.084 18:05:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:10:06.084 18:05:53 json_config -- json_config/common.sh@26 -- # echo '' 00:10:06.084 00:10:06.084 18:05:53 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:10:06.084 18:05:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:06.084 INFO: Checking if target configuration is the same... 
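The relaunch step above restarts spdk_tgt from the JSON configuration it saved earlier. A standalone sketch of the same invocation (workspace paths shortened; the waitforlisten helper is replaced here by a simplified polling loop, which is an approximation rather than the real helper):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk_tgt_config.json &
    # crude stand-in for waitforlisten: retry an RPC until the socket answers
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done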
00:10:06.084 18:05:53 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:06.084 18:05:53 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:10:06.084 18:05:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:06.084 + '[' 2 -ne 2 ']' 00:10:06.084 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:06.084 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:10:06.084 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:06.084 +++ basename /dev/fd/62 00:10:06.084 ++ mktemp /tmp/62.XXX 00:10:06.084 + tmp_file_1=/tmp/62.f6E 00:10:06.084 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:06.084 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:06.084 + tmp_file_2=/tmp/spdk_tgt_config.json.K2W 00:10:06.084 + ret=0 00:10:06.084 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:06.084 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:06.084 + diff -u /tmp/62.f6E /tmp/spdk_tgt_config.json.K2W 00:10:06.084 + echo 'INFO: JSON config files are the same' 00:10:06.084 INFO: JSON config files are the same 00:10:06.084 + rm /tmp/62.f6E /tmp/spdk_tgt_config.json.K2W 00:10:06.084 + exit 0 00:10:06.084 18:05:54 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:10:06.084 18:05:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:06.084 INFO: changing configuration and checking if this can be detected... 00:10:06.084 18:05:54 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:06.084 18:05:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:06.342 18:05:54 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:06.342 18:05:54 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:10:06.342 18:05:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:06.600 + '[' 2 -ne 2 ']' 00:10:06.600 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:10:06.600 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
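The json_diff.sh run above compares the live configuration against the saved file by sorting both through config_filter.py and diffing. A condensed sketch of the same comparison; the temporary file names are illustrative and the exact argument plumbing inside json_diff.sh is abbreviated:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < ./spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json \
        && echo 'INFO: JSON config files are the same' \
        || echo 'INFO: configuration change detected.'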
00:10:06.600 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:06.600 +++ basename /dev/fd/62 00:10:06.600 ++ mktemp /tmp/62.XXX 00:10:06.600 + tmp_file_1=/tmp/62.elZ 00:10:06.600 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:06.600 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:06.600 + tmp_file_2=/tmp/spdk_tgt_config.json.Uvd 00:10:06.600 + ret=0 00:10:06.600 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:06.858 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:10:06.858 + diff -u /tmp/62.elZ /tmp/spdk_tgt_config.json.Uvd 00:10:06.858 + ret=1 00:10:06.858 + echo '=== Start of file: /tmp/62.elZ ===' 00:10:06.858 + cat /tmp/62.elZ 00:10:06.858 + echo '=== End of file: /tmp/62.elZ ===' 00:10:06.858 + echo '' 00:10:06.858 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Uvd ===' 00:10:06.858 + cat /tmp/spdk_tgt_config.json.Uvd 00:10:06.858 + echo '=== End of file: /tmp/spdk_tgt_config.json.Uvd ===' 00:10:06.858 + echo '' 00:10:06.858 + rm /tmp/62.elZ /tmp/spdk_tgt_config.json.Uvd 00:10:06.858 + exit 1 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:10:06.858 INFO: configuration change detected. 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@324 -- # [[ -n 486163 ]] 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@200 -- # uname -s 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:06.858 18:05:54 json_config -- json_config/json_config.sh@330 -- # killprocess 486163 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@954 -- # '[' -z 486163 ']' 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@958 -- # kill -0 486163 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@959 -- # uname 00:10:06.858 18:05:54 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.858 18:05:54 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 486163 00:10:07.115 18:05:54 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.115 18:05:54 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.115 18:05:54 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 486163' 00:10:07.115 killing process with pid 486163 00:10:07.115 18:05:54 json_config -- common/autotest_common.sh@973 -- # kill 486163 00:10:07.115 18:05:54 json_config -- common/autotest_common.sh@978 -- # wait 486163 00:10:08.487 18:05:56 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:10:08.487 18:05:56 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:10:08.487 18:05:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.487 18:05:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 18:05:56 json_config -- json_config/json_config.sh@335 -- # return 0 00:10:08.487 18:05:56 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:10:08.487 INFO: Success 00:10:08.487 00:10:08.487 real 0m16.357s 00:10:08.487 user 0m18.188s 00:10:08.487 sys 0m2.458s 00:10:08.487 18:05:56 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.487 18:05:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:08.487 ************************************ 00:10:08.487 END TEST json_config 00:10:08.487 ************************************ 00:10:08.487 18:05:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:08.487 18:05:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.487 18:05:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.487 18:05:56 -- common/autotest_common.sh@10 -- # set +x 00:10:08.746 ************************************ 00:10:08.746 START TEST json_config_extra_key 00:10:08.746 ************************************ 00:10:08.746 18:05:56 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:10:08.746 18:05:56 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.746 18:05:56 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.746 18:05:56 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.746 18:05:56 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.746 18:05:56 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.746 18:05:56 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.746 18:05:56 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.746 18:05:56 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.746 18:05:56 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.746 18:05:56 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.746 18:05:56 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.747 18:05:56 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:08.747 18:05:56 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.747 18:05:56 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.747 --rc genhtml_branch_coverage=1 00:10:08.747 --rc genhtml_function_coverage=1 00:10:08.747 --rc genhtml_legend=1 00:10:08.747 --rc geninfo_all_blocks=1 00:10:08.747 --rc geninfo_unexecuted_blocks=1 00:10:08.747 00:10:08.747 ' 00:10:08.747 18:05:56 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.747 --rc genhtml_branch_coverage=1 00:10:08.747 --rc genhtml_function_coverage=1 00:10:08.747 --rc genhtml_legend=1 00:10:08.747 --rc geninfo_all_blocks=1 00:10:08.747 --rc geninfo_unexecuted_blocks=1 00:10:08.747 00:10:08.747 ' 00:10:08.747 18:05:56 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.747 --rc genhtml_branch_coverage=1 00:10:08.747 --rc genhtml_function_coverage=1 00:10:08.747 --rc genhtml_legend=1 00:10:08.747 --rc geninfo_all_blocks=1 00:10:08.747 --rc geninfo_unexecuted_blocks=1 00:10:08.747 00:10:08.747 ' 00:10:08.747 18:05:56 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.747 --rc genhtml_branch_coverage=1 00:10:08.747 --rc genhtml_function_coverage=1 00:10:08.747 --rc genhtml_legend=1 00:10:08.747 --rc geninfo_all_blocks=1 00:10:08.747 --rc geninfo_unexecuted_blocks=1 00:10:08.747 00:10:08.747 ' 00:10:08.747 18:05:56 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.747 18:05:56 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.747 18:05:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.747 18:05:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.747 18:05:56 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.747 18:05:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:08.747 18:05:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.747 18:05:56 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.747 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:08.748 INFO: launching applications... 
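In the nvmf/common.sh trace above, the message "[: : integer expression expected" comes from testing an empty string with -eq ('[' '' -eq 1 ']' at common.sh line 33). A tiny illustration of the failure and a guarded form; the variable name here is a stand-in, not the one used by common.sh:

    val=''
    [ "$val" -eq 1 ]        # -> [: : integer expression expected (what the log shows)
    [ "${val:-0}" -eq 1 ]   # guarded: an empty value defaults to 0, so the comparison is well-formed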
00:10:08.748 18:05:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=487077 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:08.748 Waiting for target to run... 00:10:08.748 18:05:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 487077 /var/tmp/spdk_tgt.sock 00:10:08.748 18:05:56 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 487077 ']' 00:10:08.748 18:05:56 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:08.748 18:05:56 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.748 18:05:56 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:08.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:08.748 18:05:56 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.748 18:05:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:08.748 [2024-11-26 18:05:56.693988] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:08.748 [2024-11-26 18:05:56.694069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487077 ] 00:10:09.313 [2024-11-26 18:05:57.047398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.313 [2024-11-26 18:05:57.088284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.879 18:05:57 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.879 18:05:57 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:10:09.879 18:05:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:09.879 00:10:09.879 18:05:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:09.879 INFO: shutting down applications... 
00:10:09.879 18:05:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:09.879 18:05:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:09.879 18:05:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:09.879 18:05:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 487077 ]] 00:10:09.879 18:05:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 487077 00:10:09.879 18:05:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:09.879 18:05:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:09.879 18:05:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 487077 00:10:09.879 18:05:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:10.461 18:05:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:10.461 18:05:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:10.461 18:05:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 487077 00:10:10.461 18:05:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:10.461 18:05:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:10.461 18:05:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:10.461 18:05:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:10.461 SPDK target shutdown done 00:10:10.461 18:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:10.461 Success 00:10:10.461 00:10:10.461 real 0m1.688s 00:10:10.461 user 0m1.677s 00:10:10.461 sys 0m0.476s 00:10:10.461 18:05:58 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.461 18:05:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:10.461 ************************************ 00:10:10.461 END TEST json_config_extra_key 00:10:10.461 ************************************ 00:10:10.461 18:05:58 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:10.461 18:05:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.461 18:05:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.461 18:05:58 -- common/autotest_common.sh@10 -- # set +x 00:10:10.461 ************************************ 00:10:10.461 START TEST alias_rpc 00:10:10.461 ************************************ 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:10.461 * Looking for test storage... 
00:10:10.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@345 -- # : 1 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.461 18:05:58 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.461 --rc genhtml_branch_coverage=1 00:10:10.461 --rc genhtml_function_coverage=1 00:10:10.461 --rc genhtml_legend=1 00:10:10.461 --rc geninfo_all_blocks=1 00:10:10.461 --rc geninfo_unexecuted_blocks=1 00:10:10.461 00:10:10.461 ' 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.461 --rc genhtml_branch_coverage=1 00:10:10.461 --rc genhtml_function_coverage=1 00:10:10.461 --rc genhtml_legend=1 00:10:10.461 --rc geninfo_all_blocks=1 00:10:10.461 --rc geninfo_unexecuted_blocks=1 00:10:10.461 00:10:10.461 ' 00:10:10.461 18:05:58 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.461 --rc genhtml_branch_coverage=1 00:10:10.461 --rc genhtml_function_coverage=1 00:10:10.461 --rc genhtml_legend=1 00:10:10.461 --rc geninfo_all_blocks=1 00:10:10.461 --rc geninfo_unexecuted_blocks=1 00:10:10.461 00:10:10.461 ' 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.461 --rc genhtml_branch_coverage=1 00:10:10.461 --rc genhtml_function_coverage=1 00:10:10.461 --rc genhtml_legend=1 00:10:10.461 --rc geninfo_all_blocks=1 00:10:10.461 --rc geninfo_unexecuted_blocks=1 00:10:10.461 00:10:10.461 ' 00:10:10.461 18:05:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:10.461 18:05:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=487387 00:10:10.461 18:05:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:10.461 18:05:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 487387 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 487387 ']' 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.461 18:05:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.461 [2024-11-26 18:05:58.440660] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:10.461 [2024-11-26 18:05:58.440752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487387 ] 00:10:10.719 [2024-11-26 18:05:58.508575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.719 [2024-11-26 18:05:58.567451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.976 18:05:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.976 18:05:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:10.976 18:05:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:10:11.233 18:05:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 487387 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 487387 ']' 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 487387 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 487387 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 487387' 00:10:11.233 killing process with pid 487387 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@973 -- # kill 487387 00:10:11.233 18:05:59 alias_rpc -- common/autotest_common.sh@978 -- # wait 487387 00:10:11.798 00:10:11.798 real 0m1.358s 00:10:11.798 user 0m1.464s 00:10:11.798 sys 0m0.457s 00:10:11.798 18:05:59 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.798 18:05:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.798 ************************************ 00:10:11.798 END TEST alias_rpc 00:10:11.798 ************************************ 00:10:11.798 18:05:59 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:11.798 18:05:59 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:11.798 18:05:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.798 18:05:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.798 18:05:59 -- common/autotest_common.sh@10 -- # set +x 00:10:11.798 ************************************ 00:10:11.798 START TEST spdkcli_tcp 00:10:11.798 ************************************ 00:10:11.798 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:10:11.798 * Looking for test storage... 
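The alias_rpc teardown a few entries above (and the json_config one before it) uses the same killprocess helper pattern from autotest_common.sh. A minimal sketch of that pattern with the pid from this run as an illustrative value; wait only works here because the target was started by the same shell:

    pid=487387
    kill -0 "$pid"                               # confirm the process is still alive
    name=$(ps --no-headers -o comm= "$pid")      # reactor_0 for an SPDK app, per the log
    if [ "$name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"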
00:10:11.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:10:11.798 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.798 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.798 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.798 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.798 18:05:59 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:11.798 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.798 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.798 --rc genhtml_branch_coverage=1 00:10:11.798 --rc genhtml_function_coverage=1 00:10:11.798 --rc genhtml_legend=1 00:10:11.798 --rc geninfo_all_blocks=1 00:10:11.798 --rc geninfo_unexecuted_blocks=1 00:10:11.798 00:10:11.798 ' 00:10:11.798 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.798 --rc genhtml_branch_coverage=1 00:10:11.798 --rc genhtml_function_coverage=1 00:10:11.798 --rc genhtml_legend=1 00:10:11.798 --rc geninfo_all_blocks=1 00:10:11.798 --rc 
geninfo_unexecuted_blocks=1 00:10:11.798 00:10:11.798 ' 00:10:11.798 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.799 --rc genhtml_branch_coverage=1 00:10:11.799 --rc genhtml_function_coverage=1 00:10:11.799 --rc genhtml_legend=1 00:10:11.799 --rc geninfo_all_blocks=1 00:10:11.799 --rc geninfo_unexecuted_blocks=1 00:10:11.799 00:10:11.799 ' 00:10:11.799 18:05:59 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.799 --rc genhtml_branch_coverage=1 00:10:11.799 --rc genhtml_function_coverage=1 00:10:11.799 --rc genhtml_legend=1 00:10:11.799 --rc geninfo_all_blocks=1 00:10:11.799 --rc geninfo_unexecuted_blocks=1 00:10:11.799 00:10:11.799 ' 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:11.799 18:05:59 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.799 18:05:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=487591 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:11.799 18:05:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 487591 00:10:11.799 18:05:59 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 487591 ']' 00:10:11.799 18:05:59 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.799 18:05:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.799 18:05:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.799 18:05:59 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.799 18:05:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.058 [2024-11-26 18:05:59.843792] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
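In the spdkcli_tcp entries that follow, the test starts spdk_tgt (-m 0x3), bridges its UNIX-domain RPC socket to TCP port 9998 with socat, and then calls rpc_get_methods over 127.0.0.1:9998. Condensed into a standalone sketch, with all values copied from the trace below (socat is backgrounded, as the test does):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # expose the RPC socket on TCP 9998
    socat_pid=$!
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # retry/timeout flags as logged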
00:10:12.058 [2024-11-26 18:05:59.843883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487591 ] 00:10:12.058 [2024-11-26 18:05:59.910240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:12.058 [2024-11-26 18:05:59.970929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.058 [2024-11-26 18:05:59.970935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.316 18:06:00 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.316 18:06:00 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:10:12.316 18:06:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=487700 00:10:12.316 18:06:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:12.316 18:06:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:12.574 [ 00:10:12.574 "bdev_malloc_delete", 00:10:12.574 "bdev_malloc_create", 00:10:12.574 "bdev_null_resize", 00:10:12.574 "bdev_null_delete", 00:10:12.574 "bdev_null_create", 00:10:12.574 "bdev_nvme_cuse_unregister", 00:10:12.574 "bdev_nvme_cuse_register", 00:10:12.574 "bdev_opal_new_user", 00:10:12.574 "bdev_opal_set_lock_state", 00:10:12.574 "bdev_opal_delete", 00:10:12.574 "bdev_opal_get_info", 00:10:12.574 "bdev_opal_create", 00:10:12.574 "bdev_nvme_opal_revert", 00:10:12.574 "bdev_nvme_opal_init", 00:10:12.574 "bdev_nvme_send_cmd", 00:10:12.574 "bdev_nvme_set_keys", 00:10:12.574 "bdev_nvme_get_path_iostat", 00:10:12.574 "bdev_nvme_get_mdns_discovery_info", 00:10:12.574 "bdev_nvme_stop_mdns_discovery", 00:10:12.574 "bdev_nvme_start_mdns_discovery", 00:10:12.574 "bdev_nvme_set_multipath_policy", 00:10:12.574 "bdev_nvme_set_preferred_path", 00:10:12.574 "bdev_nvme_get_io_paths", 00:10:12.574 "bdev_nvme_remove_error_injection", 00:10:12.574 "bdev_nvme_add_error_injection", 00:10:12.574 "bdev_nvme_get_discovery_info", 00:10:12.574 "bdev_nvme_stop_discovery", 00:10:12.574 "bdev_nvme_start_discovery", 00:10:12.574 "bdev_nvme_get_controller_health_info", 00:10:12.574 "bdev_nvme_disable_controller", 00:10:12.574 "bdev_nvme_enable_controller", 00:10:12.574 "bdev_nvme_reset_controller", 00:10:12.574 "bdev_nvme_get_transport_statistics", 00:10:12.574 "bdev_nvme_apply_firmware", 00:10:12.574 "bdev_nvme_detach_controller", 00:10:12.574 "bdev_nvme_get_controllers", 00:10:12.574 "bdev_nvme_attach_controller", 00:10:12.574 "bdev_nvme_set_hotplug", 00:10:12.574 "bdev_nvme_set_options", 00:10:12.574 "bdev_passthru_delete", 00:10:12.574 "bdev_passthru_create", 00:10:12.574 "bdev_lvol_set_parent_bdev", 00:10:12.574 "bdev_lvol_set_parent", 00:10:12.574 "bdev_lvol_check_shallow_copy", 00:10:12.574 "bdev_lvol_start_shallow_copy", 00:10:12.574 "bdev_lvol_grow_lvstore", 00:10:12.574 "bdev_lvol_get_lvols", 00:10:12.574 "bdev_lvol_get_lvstores", 00:10:12.574 "bdev_lvol_delete", 00:10:12.574 "bdev_lvol_set_read_only", 00:10:12.574 "bdev_lvol_resize", 00:10:12.574 "bdev_lvol_decouple_parent", 00:10:12.574 "bdev_lvol_inflate", 00:10:12.574 "bdev_lvol_rename", 00:10:12.574 "bdev_lvol_clone_bdev", 00:10:12.574 "bdev_lvol_clone", 00:10:12.574 "bdev_lvol_snapshot", 00:10:12.574 "bdev_lvol_create", 00:10:12.574 "bdev_lvol_delete_lvstore", 00:10:12.574 "bdev_lvol_rename_lvstore", 
00:10:12.574 "bdev_lvol_create_lvstore", 00:10:12.574 "bdev_raid_set_options", 00:10:12.574 "bdev_raid_remove_base_bdev", 00:10:12.574 "bdev_raid_add_base_bdev", 00:10:12.574 "bdev_raid_delete", 00:10:12.574 "bdev_raid_create", 00:10:12.574 "bdev_raid_get_bdevs", 00:10:12.574 "bdev_error_inject_error", 00:10:12.574 "bdev_error_delete", 00:10:12.574 "bdev_error_create", 00:10:12.574 "bdev_split_delete", 00:10:12.574 "bdev_split_create", 00:10:12.574 "bdev_delay_delete", 00:10:12.574 "bdev_delay_create", 00:10:12.574 "bdev_delay_update_latency", 00:10:12.574 "bdev_zone_block_delete", 00:10:12.574 "bdev_zone_block_create", 00:10:12.574 "blobfs_create", 00:10:12.574 "blobfs_detect", 00:10:12.574 "blobfs_set_cache_size", 00:10:12.574 "bdev_aio_delete", 00:10:12.574 "bdev_aio_rescan", 00:10:12.574 "bdev_aio_create", 00:10:12.574 "bdev_ftl_set_property", 00:10:12.574 "bdev_ftl_get_properties", 00:10:12.574 "bdev_ftl_get_stats", 00:10:12.574 "bdev_ftl_unmap", 00:10:12.574 "bdev_ftl_unload", 00:10:12.574 "bdev_ftl_delete", 00:10:12.574 "bdev_ftl_load", 00:10:12.574 "bdev_ftl_create", 00:10:12.574 "bdev_virtio_attach_controller", 00:10:12.574 "bdev_virtio_scsi_get_devices", 00:10:12.574 "bdev_virtio_detach_controller", 00:10:12.574 "bdev_virtio_blk_set_hotplug", 00:10:12.574 "bdev_iscsi_delete", 00:10:12.574 "bdev_iscsi_create", 00:10:12.574 "bdev_iscsi_set_options", 00:10:12.574 "accel_error_inject_error", 00:10:12.574 "ioat_scan_accel_module", 00:10:12.574 "dsa_scan_accel_module", 00:10:12.574 "iaa_scan_accel_module", 00:10:12.574 "vfu_virtio_create_fs_endpoint", 00:10:12.574 "vfu_virtio_create_scsi_endpoint", 00:10:12.574 "vfu_virtio_scsi_remove_target", 00:10:12.574 "vfu_virtio_scsi_add_target", 00:10:12.574 "vfu_virtio_create_blk_endpoint", 00:10:12.574 "vfu_virtio_delete_endpoint", 00:10:12.574 "keyring_file_remove_key", 00:10:12.574 "keyring_file_add_key", 00:10:12.574 "keyring_linux_set_options", 00:10:12.574 "fsdev_aio_delete", 00:10:12.575 "fsdev_aio_create", 00:10:12.575 "iscsi_get_histogram", 00:10:12.575 "iscsi_enable_histogram", 00:10:12.575 "iscsi_set_options", 00:10:12.575 "iscsi_get_auth_groups", 00:10:12.575 "iscsi_auth_group_remove_secret", 00:10:12.575 "iscsi_auth_group_add_secret", 00:10:12.575 "iscsi_delete_auth_group", 00:10:12.575 "iscsi_create_auth_group", 00:10:12.575 "iscsi_set_discovery_auth", 00:10:12.575 "iscsi_get_options", 00:10:12.575 "iscsi_target_node_request_logout", 00:10:12.575 "iscsi_target_node_set_redirect", 00:10:12.575 "iscsi_target_node_set_auth", 00:10:12.575 "iscsi_target_node_add_lun", 00:10:12.575 "iscsi_get_stats", 00:10:12.575 "iscsi_get_connections", 00:10:12.575 "iscsi_portal_group_set_auth", 00:10:12.575 "iscsi_start_portal_group", 00:10:12.575 "iscsi_delete_portal_group", 00:10:12.575 "iscsi_create_portal_group", 00:10:12.575 "iscsi_get_portal_groups", 00:10:12.575 "iscsi_delete_target_node", 00:10:12.575 "iscsi_target_node_remove_pg_ig_maps", 00:10:12.575 "iscsi_target_node_add_pg_ig_maps", 00:10:12.575 "iscsi_create_target_node", 00:10:12.575 "iscsi_get_target_nodes", 00:10:12.575 "iscsi_delete_initiator_group", 00:10:12.575 "iscsi_initiator_group_remove_initiators", 00:10:12.575 "iscsi_initiator_group_add_initiators", 00:10:12.575 "iscsi_create_initiator_group", 00:10:12.575 "iscsi_get_initiator_groups", 00:10:12.575 "nvmf_set_crdt", 00:10:12.575 "nvmf_set_config", 00:10:12.575 "nvmf_set_max_subsystems", 00:10:12.575 "nvmf_stop_mdns_prr", 00:10:12.575 "nvmf_publish_mdns_prr", 00:10:12.575 "nvmf_subsystem_get_listeners", 00:10:12.575 
"nvmf_subsystem_get_qpairs", 00:10:12.575 "nvmf_subsystem_get_controllers", 00:10:12.575 "nvmf_get_stats", 00:10:12.575 "nvmf_get_transports", 00:10:12.575 "nvmf_create_transport", 00:10:12.575 "nvmf_get_targets", 00:10:12.575 "nvmf_delete_target", 00:10:12.575 "nvmf_create_target", 00:10:12.575 "nvmf_subsystem_allow_any_host", 00:10:12.575 "nvmf_subsystem_set_keys", 00:10:12.575 "nvmf_subsystem_remove_host", 00:10:12.575 "nvmf_subsystem_add_host", 00:10:12.575 "nvmf_ns_remove_host", 00:10:12.575 "nvmf_ns_add_host", 00:10:12.575 "nvmf_subsystem_remove_ns", 00:10:12.575 "nvmf_subsystem_set_ns_ana_group", 00:10:12.575 "nvmf_subsystem_add_ns", 00:10:12.575 "nvmf_subsystem_listener_set_ana_state", 00:10:12.575 "nvmf_discovery_get_referrals", 00:10:12.575 "nvmf_discovery_remove_referral", 00:10:12.575 "nvmf_discovery_add_referral", 00:10:12.575 "nvmf_subsystem_remove_listener", 00:10:12.575 "nvmf_subsystem_add_listener", 00:10:12.575 "nvmf_delete_subsystem", 00:10:12.575 "nvmf_create_subsystem", 00:10:12.575 "nvmf_get_subsystems", 00:10:12.575 "env_dpdk_get_mem_stats", 00:10:12.575 "nbd_get_disks", 00:10:12.575 "nbd_stop_disk", 00:10:12.575 "nbd_start_disk", 00:10:12.575 "ublk_recover_disk", 00:10:12.575 "ublk_get_disks", 00:10:12.575 "ublk_stop_disk", 00:10:12.575 "ublk_start_disk", 00:10:12.575 "ublk_destroy_target", 00:10:12.575 "ublk_create_target", 00:10:12.575 "virtio_blk_create_transport", 00:10:12.575 "virtio_blk_get_transports", 00:10:12.575 "vhost_controller_set_coalescing", 00:10:12.575 "vhost_get_controllers", 00:10:12.575 "vhost_delete_controller", 00:10:12.575 "vhost_create_blk_controller", 00:10:12.575 "vhost_scsi_controller_remove_target", 00:10:12.575 "vhost_scsi_controller_add_target", 00:10:12.575 "vhost_start_scsi_controller", 00:10:12.575 "vhost_create_scsi_controller", 00:10:12.575 "thread_set_cpumask", 00:10:12.575 "scheduler_set_options", 00:10:12.575 "framework_get_governor", 00:10:12.575 "framework_get_scheduler", 00:10:12.575 "framework_set_scheduler", 00:10:12.575 "framework_get_reactors", 00:10:12.575 "thread_get_io_channels", 00:10:12.575 "thread_get_pollers", 00:10:12.575 "thread_get_stats", 00:10:12.575 "framework_monitor_context_switch", 00:10:12.575 "spdk_kill_instance", 00:10:12.575 "log_enable_timestamps", 00:10:12.575 "log_get_flags", 00:10:12.575 "log_clear_flag", 00:10:12.575 "log_set_flag", 00:10:12.575 "log_get_level", 00:10:12.575 "log_set_level", 00:10:12.575 "log_get_print_level", 00:10:12.575 "log_set_print_level", 00:10:12.575 "framework_enable_cpumask_locks", 00:10:12.575 "framework_disable_cpumask_locks", 00:10:12.575 "framework_wait_init", 00:10:12.575 "framework_start_init", 00:10:12.575 "scsi_get_devices", 00:10:12.575 "bdev_get_histogram", 00:10:12.575 "bdev_enable_histogram", 00:10:12.575 "bdev_set_qos_limit", 00:10:12.575 "bdev_set_qd_sampling_period", 00:10:12.575 "bdev_get_bdevs", 00:10:12.575 "bdev_reset_iostat", 00:10:12.575 "bdev_get_iostat", 00:10:12.575 "bdev_examine", 00:10:12.575 "bdev_wait_for_examine", 00:10:12.575 "bdev_set_options", 00:10:12.575 "accel_get_stats", 00:10:12.575 "accel_set_options", 00:10:12.575 "accel_set_driver", 00:10:12.575 "accel_crypto_key_destroy", 00:10:12.575 "accel_crypto_keys_get", 00:10:12.575 "accel_crypto_key_create", 00:10:12.575 "accel_assign_opc", 00:10:12.575 "accel_get_module_info", 00:10:12.575 "accel_get_opc_assignments", 00:10:12.575 "vmd_rescan", 00:10:12.575 "vmd_remove_device", 00:10:12.575 "vmd_enable", 00:10:12.575 "sock_get_default_impl", 00:10:12.575 "sock_set_default_impl", 
00:10:12.575 "sock_impl_set_options", 00:10:12.575 "sock_impl_get_options", 00:10:12.575 "iobuf_get_stats", 00:10:12.575 "iobuf_set_options", 00:10:12.575 "keyring_get_keys", 00:10:12.575 "vfu_tgt_set_base_path", 00:10:12.575 "framework_get_pci_devices", 00:10:12.575 "framework_get_config", 00:10:12.575 "framework_get_subsystems", 00:10:12.575 "fsdev_set_opts", 00:10:12.575 "fsdev_get_opts", 00:10:12.575 "trace_get_info", 00:10:12.575 "trace_get_tpoint_group_mask", 00:10:12.575 "trace_disable_tpoint_group", 00:10:12.575 "trace_enable_tpoint_group", 00:10:12.575 "trace_clear_tpoint_mask", 00:10:12.575 "trace_set_tpoint_mask", 00:10:12.575 "notify_get_notifications", 00:10:12.575 "notify_get_types", 00:10:12.575 "spdk_get_version", 00:10:12.575 "rpc_get_methods" 00:10:12.575 ] 00:10:12.575 18:06:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:12.575 18:06:00 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.575 18:06:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.575 18:06:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:12.575 18:06:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 487591 00:10:12.575 18:06:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 487591 ']' 00:10:12.575 18:06:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 487591 00:10:12.575 18:06:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:10:12.575 18:06:00 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.575 18:06:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 487591 00:10:12.832 18:06:00 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.832 18:06:00 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.832 18:06:00 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 487591' 00:10:12.832 killing process with pid 487591 00:10:12.832 18:06:00 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 487591 00:10:12.832 18:06:00 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 487591 00:10:13.091 00:10:13.091 real 0m1.373s 00:10:13.091 user 0m2.472s 00:10:13.091 sys 0m0.470s 00:10:13.091 18:06:01 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.091 18:06:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.091 ************************************ 00:10:13.091 END TEST spdkcli_tcp 00:10:13.091 ************************************ 00:10:13.091 18:06:01 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:13.091 18:06:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:13.091 18:06:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.091 18:06:01 -- common/autotest_common.sh@10 -- # set +x 00:10:13.091 ************************************ 00:10:13.091 START TEST dpdk_mem_utility 00:10:13.091 ************************************ 00:10:13.091 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:13.349 * Looking for test storage... 
00:10:13.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:10:13.349 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:13.349 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:10:13.349 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:13.349 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.349 18:06:01 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:13.349 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.349 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:13.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.349 --rc genhtml_branch_coverage=1 00:10:13.349 --rc genhtml_function_coverage=1 00:10:13.349 --rc genhtml_legend=1 00:10:13.349 --rc geninfo_all_blocks=1 00:10:13.349 --rc geninfo_unexecuted_blocks=1 00:10:13.349 00:10:13.349 ' 00:10:13.349 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:13.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.349 --rc 
genhtml_branch_coverage=1 00:10:13.349 --rc genhtml_function_coverage=1 00:10:13.349 --rc genhtml_legend=1 00:10:13.349 --rc geninfo_all_blocks=1 00:10:13.349 --rc geninfo_unexecuted_blocks=1 00:10:13.349 00:10:13.349 ' 00:10:13.349 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:13.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.349 --rc genhtml_branch_coverage=1 00:10:13.349 --rc genhtml_function_coverage=1 00:10:13.349 --rc genhtml_legend=1 00:10:13.349 --rc geninfo_all_blocks=1 00:10:13.349 --rc geninfo_unexecuted_blocks=1 00:10:13.349 00:10:13.349 ' 00:10:13.350 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:13.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.350 --rc genhtml_branch_coverage=1 00:10:13.350 --rc genhtml_function_coverage=1 00:10:13.350 --rc genhtml_legend=1 00:10:13.350 --rc geninfo_all_blocks=1 00:10:13.350 --rc geninfo_unexecuted_blocks=1 00:10:13.350 00:10:13.350 ' 00:10:13.350 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:13.350 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=487880 00:10:13.350 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:10:13.350 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 487880 00:10:13.350 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 487880 ']' 00:10:13.350 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.350 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.350 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.350 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.350 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:13.350 [2024-11-26 18:06:01.262355] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:13.350 [2024-11-26 18:06:01.262451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487880 ] 00:10:13.350 [2024-11-26 18:06:01.332001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.608 [2024-11-26 18:06:01.396816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.867 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.867 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:10:13.867 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:13.867 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:13.867 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.867 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:13.867 { 00:10:13.867 "filename": "/tmp/spdk_mem_dump.txt" 00:10:13.867 } 00:10:13.867 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.867 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:10:13.867 DPDK memory size 818.000000 MiB in 1 heap(s) 00:10:13.867 1 heaps totaling size 818.000000 MiB 00:10:13.867 size: 818.000000 MiB heap id: 0 00:10:13.867 end heaps---------- 00:10:13.867 9 mempools totaling size 603.782043 MiB 00:10:13.867 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:13.867 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:13.867 size: 100.555481 MiB name: bdev_io_487880 00:10:13.867 size: 50.003479 MiB name: msgpool_487880 00:10:13.867 size: 36.509338 MiB name: fsdev_io_487880 00:10:13.867 size: 21.763794 MiB name: PDU_Pool 00:10:13.867 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:13.867 size: 4.133484 MiB name: evtpool_487880 00:10:13.867 size: 0.026123 MiB name: Session_Pool 00:10:13.867 end mempools------- 00:10:13.867 6 memzones totaling size 4.142822 MiB 00:10:13.867 size: 1.000366 MiB name: RG_ring_0_487880 00:10:13.867 size: 1.000366 MiB name: RG_ring_1_487880 00:10:13.867 size: 1.000366 MiB name: RG_ring_4_487880 00:10:13.867 size: 1.000366 MiB name: RG_ring_5_487880 00:10:13.867 size: 0.125366 MiB name: RG_ring_2_487880 00:10:13.867 size: 0.015991 MiB name: RG_ring_3_487880 00:10:13.867 end memzones------- 00:10:13.867 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:10:13.867 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:10:13.867 list of free elements. 
size: 10.852478 MiB 00:10:13.867 element at address: 0x200019200000 with size: 0.999878 MiB 00:10:13.867 element at address: 0x200019400000 with size: 0.999878 MiB 00:10:13.867 element at address: 0x200000400000 with size: 0.998535 MiB 00:10:13.867 element at address: 0x200032000000 with size: 0.994446 MiB 00:10:13.867 element at address: 0x200006400000 with size: 0.959839 MiB 00:10:13.867 element at address: 0x200012c00000 with size: 0.944275 MiB 00:10:13.867 element at address: 0x200019600000 with size: 0.936584 MiB 00:10:13.867 element at address: 0x200000200000 with size: 0.717346 MiB 00:10:13.867 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:10:13.867 element at address: 0x200000c00000 with size: 0.495422 MiB 00:10:13.867 element at address: 0x20000a600000 with size: 0.490723 MiB 00:10:13.867 element at address: 0x200019800000 with size: 0.485657 MiB 00:10:13.867 element at address: 0x200003e00000 with size: 0.481934 MiB 00:10:13.867 element at address: 0x200028200000 with size: 0.410034 MiB 00:10:13.867 element at address: 0x200000800000 with size: 0.355042 MiB 00:10:13.867 list of standard malloc elements. size: 199.218628 MiB 00:10:13.867 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:10:13.867 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:10:13.867 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:13.867 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:10:13.867 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:10:13.867 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:13.867 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:10:13.867 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:13.867 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:10:13.867 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20000085b040 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20000085f300 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20000087f680 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:10:13.867 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:10:13.867 element at address: 0x200000cff000 with size: 0.000183 MiB 00:10:13.867 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:10:13.867 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:10:13.867 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:10:13.867 element at address: 0x200003efb980 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:10:13.867 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:10:13.867 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:10:13.867 element at address: 0x200028268f80 with size: 0.000183 MiB 00:10:13.867 element at address: 0x200028269040 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:10:13.867 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:10:13.867 list of memzone associated elements. size: 607.928894 MiB 00:10:13.867 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:10:13.867 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:13.867 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:10:13.867 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:13.867 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:10:13.867 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_487880_0 00:10:13.867 element at address: 0x200000dff380 with size: 48.003052 MiB 00:10:13.867 associated memzone info: size: 48.002930 MiB name: MP_msgpool_487880_0 00:10:13.867 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:10:13.867 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_487880_0 00:10:13.867 element at address: 0x2000199be940 with size: 20.255554 MiB 00:10:13.867 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:13.867 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:10:13.867 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:13.867 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:10:13.867 associated memzone info: size: 3.000122 MiB name: MP_evtpool_487880_0 00:10:13.867 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:10:13.867 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_487880 00:10:13.867 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:13.867 associated memzone info: size: 1.007996 MiB name: MP_evtpool_487880 00:10:13.867 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:10:13.867 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:13.867 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:10:13.868 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:13.868 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:10:13.868 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:13.868 element at address: 0x200003efba40 with size: 1.008118 MiB 00:10:13.868 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:13.868 element at address: 0x200000cff180 with size: 1.000488 MiB 00:10:13.868 associated memzone info: size: 1.000366 MiB name: RG_ring_0_487880 00:10:13.868 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:10:13.868 associated memzone info: size: 1.000366 MiB name: RG_ring_1_487880 00:10:13.868 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:10:13.868 associated memzone info: size: 1.000366 MiB name: RG_ring_4_487880 00:10:13.868 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:10:13.868 associated memzone info: size: 1.000366 MiB name: RG_ring_5_487880 00:10:13.868 element at address: 0x20000087f740 with size: 0.500488 MiB 00:10:13.868 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_487880 00:10:13.868 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:10:13.868 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_487880 00:10:13.868 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:10:13.868 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:13.868 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:10:13.868 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:13.868 element at address: 0x20001987c540 with size: 0.250488 MiB 00:10:13.868 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:13.868 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:10:13.868 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_487880 00:10:13.868 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:10:13.868 associated memzone info: size: 0.125366 MiB name: RG_ring_2_487880 00:10:13.868 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:10:13.868 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:13.868 element at address: 0x200028269100 with size: 0.023743 MiB 00:10:13.868 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:13.868 element at address: 0x20000085b100 with size: 0.016113 MiB 00:10:13.868 associated memzone info: size: 0.015991 MiB name: RG_ring_3_487880 00:10:13.868 element at address: 0x20002826f240 with size: 0.002441 MiB 00:10:13.868 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:13.868 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:10:13.868 associated memzone info: size: 0.000183 MiB name: MP_msgpool_487880 00:10:13.868 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:10:13.868 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_487880 00:10:13.868 element at address: 0x20000085af00 with size: 0.000305 MiB 00:10:13.868 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_487880 00:10:13.868 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:10:13.868 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:13.868 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:13.868 18:06:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 487880 00:10:13.868 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 487880 ']' 00:10:13.868 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 487880 00:10:13.868 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:10:13.868 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.868 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 487880 00:10:13.868 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.868 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.868 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 487880' 00:10:13.868 killing process with pid 487880 00:10:13.868 18:06:01 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 487880 00:10:13.868 18:06:01 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 487880 00:10:14.434 00:10:14.434 real 0m1.188s 00:10:14.434 user 0m1.141s 00:10:14.434 sys 0m0.466s 00:10:14.434 18:06:02 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.434 18:06:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:14.434 ************************************ 00:10:14.434 END TEST dpdk_mem_utility 00:10:14.434 ************************************ 00:10:14.434 18:06:02 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:10:14.434 18:06:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:14.434 18:06:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.434 18:06:02 -- common/autotest_common.sh@10 -- # set +x 00:10:14.434 ************************************ 00:10:14.434 START TEST event 00:10:14.434 ************************************ 00:10:14.434 18:06:02 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:10:14.434 * Looking for test storage... 00:10:14.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:10:14.434 18:06:02 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.434 18:06:02 event -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.434 18:06:02 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.692 18:06:02 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.692 18:06:02 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.692 18:06:02 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.692 18:06:02 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.692 18:06:02 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.692 18:06:02 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.692 18:06:02 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.692 18:06:02 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.692 18:06:02 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.692 18:06:02 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.692 18:06:02 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.692 18:06:02 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.692 18:06:02 event -- scripts/common.sh@344 -- # case "$op" in 00:10:14.692 18:06:02 event -- scripts/common.sh@345 -- # : 1 00:10:14.692 18:06:02 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.692 18:06:02 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.692 18:06:02 event -- scripts/common.sh@365 -- # decimal 1 00:10:14.692 18:06:02 event -- scripts/common.sh@353 -- # local d=1 00:10:14.692 18:06:02 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.692 18:06:02 event -- scripts/common.sh@355 -- # echo 1 00:10:14.692 18:06:02 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.692 18:06:02 event -- scripts/common.sh@366 -- # decimal 2 00:10:14.692 18:06:02 event -- scripts/common.sh@353 -- # local d=2 00:10:14.692 18:06:02 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.692 18:06:02 event -- scripts/common.sh@355 -- # echo 2 00:10:14.692 18:06:02 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.692 18:06:02 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.692 18:06:02 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.692 18:06:02 event -- scripts/common.sh@368 -- # return 0 00:10:14.692 18:06:02 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.692 18:06:02 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.692 --rc genhtml_branch_coverage=1 00:10:14.692 --rc genhtml_function_coverage=1 00:10:14.692 --rc genhtml_legend=1 00:10:14.692 --rc geninfo_all_blocks=1 00:10:14.692 --rc geninfo_unexecuted_blocks=1 00:10:14.692 00:10:14.692 ' 00:10:14.692 18:06:02 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.692 --rc genhtml_branch_coverage=1 00:10:14.692 --rc genhtml_function_coverage=1 00:10:14.692 --rc genhtml_legend=1 00:10:14.692 --rc geninfo_all_blocks=1 00:10:14.692 --rc geninfo_unexecuted_blocks=1 00:10:14.692 00:10:14.692 ' 00:10:14.692 18:06:02 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.692 --rc genhtml_branch_coverage=1 00:10:14.692 --rc genhtml_function_coverage=1 00:10:14.692 --rc genhtml_legend=1 00:10:14.692 --rc geninfo_all_blocks=1 00:10:14.692 --rc geninfo_unexecuted_blocks=1 00:10:14.692 00:10:14.692 ' 00:10:14.692 18:06:02 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.692 --rc genhtml_branch_coverage=1 00:10:14.692 --rc genhtml_function_coverage=1 00:10:14.692 --rc genhtml_legend=1 00:10:14.692 --rc geninfo_all_blocks=1 00:10:14.692 --rc geninfo_unexecuted_blocks=1 00:10:14.692 00:10:14.692 ' 00:10:14.692 18:06:02 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:14.692 18:06:02 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:14.692 18:06:02 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:14.692 18:06:02 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:14.692 18:06:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.692 18:06:02 event -- common/autotest_common.sh@10 -- # set +x 00:10:14.692 ************************************ 00:10:14.692 START TEST event_perf 00:10:14.692 ************************************ 00:10:14.693 18:06:02 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:10:14.693 Running I/O for 1 seconds...[2024-11-26 18:06:02.495972] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:14.693 [2024-11-26 18:06:02.496049] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488199 ] 00:10:14.693 [2024-11-26 18:06:02.563411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.693 [2024-11-26 18:06:02.624954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.693 [2024-11-26 18:06:02.625061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.693 [2024-11-26 18:06:02.625157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.693 [2024-11-26 18:06:02.625161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.066 Running I/O for 1 seconds... 00:10:16.066 lcore 0: 222388 00:10:16.066 lcore 1: 222387 00:10:16.066 lcore 2: 222386 00:10:16.066 lcore 3: 222386 00:10:16.066 done. 00:10:16.066 00:10:16.066 real 0m1.212s 00:10:16.066 user 0m4.135s 00:10:16.066 sys 0m0.073s 00:10:16.066 18:06:03 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.066 18:06:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:16.066 ************************************ 00:10:16.066 END TEST event_perf 00:10:16.066 ************************************ 00:10:16.066 18:06:03 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:16.066 18:06:03 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:16.066 18:06:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.066 18:06:03 event -- common/autotest_common.sh@10 -- # set +x 00:10:16.066 ************************************ 00:10:16.066 START TEST event_reactor 00:10:16.066 ************************************ 00:10:16.066 18:06:03 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:10:16.066 [2024-11-26 18:06:03.751099] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:16.066 [2024-11-26 18:06:03.751159] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488391 ] 00:10:16.066 [2024-11-26 18:06:03.817551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.066 [2024-11-26 18:06:03.875325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.000 test_start 00:10:17.000 oneshot 00:10:17.000 tick 100 00:10:17.000 tick 100 00:10:17.000 tick 250 00:10:17.000 tick 100 00:10:17.000 tick 100 00:10:17.000 tick 100 00:10:17.000 tick 250 00:10:17.000 tick 500 00:10:17.000 tick 100 00:10:17.000 tick 100 00:10:17.000 tick 250 00:10:17.000 tick 100 00:10:17.000 tick 100 00:10:17.000 test_end 00:10:17.000 00:10:17.000 real 0m1.201s 00:10:17.000 user 0m1.130s 00:10:17.000 sys 0m0.066s 00:10:17.000 18:06:04 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.000 18:06:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:17.000 ************************************ 00:10:17.000 END TEST event_reactor 00:10:17.000 ************************************ 00:10:17.000 18:06:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:17.000 18:06:04 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.000 18:06:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.000 18:06:04 event -- common/autotest_common.sh@10 -- # set +x 00:10:17.000 ************************************ 00:10:17.000 START TEST event_reactor_perf 00:10:17.000 ************************************ 00:10:17.000 18:06:04 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:17.000 [2024-11-26 18:06:05.004588] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:17.000 [2024-11-26 18:06:05.004652] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488545 ] 00:10:17.258 [2024-11-26 18:06:05.070695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.258 [2024-11-26 18:06:05.130073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.191 test_start 00:10:18.191 test_end 00:10:18.191 Performance: 422619 events per second 00:10:18.191 00:10:18.191 real 0m1.204s 00:10:18.191 user 0m1.134s 00:10:18.191 sys 0m0.066s 00:10:18.191 18:06:06 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.191 18:06:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:18.191 ************************************ 00:10:18.191 END TEST event_reactor_perf 00:10:18.191 ************************************ 00:10:18.451 18:06:06 event -- event/event.sh@49 -- # uname -s 00:10:18.451 18:06:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:18.451 18:06:06 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:18.451 18:06:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.451 18:06:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.451 18:06:06 event -- common/autotest_common.sh@10 -- # set +x 00:10:18.451 ************************************ 00:10:18.451 START TEST event_scheduler 00:10:18.451 ************************************ 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:10:18.451 * Looking for test storage... 
00:10:18.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.451 18:06:06 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.451 --rc genhtml_branch_coverage=1 00:10:18.451 --rc genhtml_function_coverage=1 00:10:18.451 --rc genhtml_legend=1 00:10:18.451 --rc geninfo_all_blocks=1 00:10:18.451 --rc geninfo_unexecuted_blocks=1 00:10:18.451 00:10:18.451 ' 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.451 --rc genhtml_branch_coverage=1 00:10:18.451 --rc genhtml_function_coverage=1 00:10:18.451 --rc genhtml_legend=1 00:10:18.451 --rc geninfo_all_blocks=1 00:10:18.451 --rc geninfo_unexecuted_blocks=1 00:10:18.451 00:10:18.451 ' 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:18.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.451 --rc genhtml_branch_coverage=1 00:10:18.451 --rc genhtml_function_coverage=1 00:10:18.451 --rc genhtml_legend=1 00:10:18.451 --rc geninfo_all_blocks=1 00:10:18.451 --rc geninfo_unexecuted_blocks=1 00:10:18.451 00:10:18.451 ' 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.451 --rc genhtml_branch_coverage=1 00:10:18.451 --rc genhtml_function_coverage=1 00:10:18.451 --rc genhtml_legend=1 00:10:18.451 --rc geninfo_all_blocks=1 00:10:18.451 --rc geninfo_unexecuted_blocks=1 00:10:18.451 00:10:18.451 ' 00:10:18.451 18:06:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:18.451 18:06:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=488731 00:10:18.451 18:06:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:18.451 18:06:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:18.451 18:06:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 488731 
00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 488731 ']' 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.451 18:06:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:18.451 [2024-11-26 18:06:06.450966] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:18.451 [2024-11-26 18:06:06.451042] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488731 ] 00:10:18.709 [2024-11-26 18:06:06.525319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.709 [2024-11-26 18:06:06.591180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.709 [2024-11-26 18:06:06.591207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.709 [2024-11-26 18:06:06.591272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.709 [2024-11-26 18:06:06.591276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:10:18.967 18:06:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 [2024-11-26 18:06:06.756437] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:10:18.967 [2024-11-26 18:06:06.756464] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:18.967 [2024-11-26 18:06:06.756482] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:18.967 [2024-11-26 18:06:06.756493] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:18.967 [2024-11-26 18:06:06.756503] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.967 18:06:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 [2024-11-26 18:06:06.862469] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.967 18:06:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.967 18:06:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 ************************************ 00:10:18.967 START TEST scheduler_create_thread 00:10:18.967 ************************************ 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 2 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 3 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 4 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 5 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.967 6 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.967 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.968 7 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.968 8 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.968 9 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.968 10 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.968 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:19.225 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.225 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:19.225 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.225 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:19.225 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.225 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:19.225 18:06:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:19.225 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.225 18:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:19.482 18:06:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.482 00:10:19.482 real 0m0.592s 00:10:19.482 user 0m0.008s 00:10:19.482 sys 0m0.005s 00:10:19.482 18:06:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.482 18:06:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:19.482 ************************************ 00:10:19.482 END TEST scheduler_create_thread 00:10:19.482 ************************************ 00:10:19.739 18:06:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:19.739 18:06:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 488731 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 488731 ']' 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 488731 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 488731 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 488731' 00:10:19.739 killing process with pid 488731 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 488731 00:10:19.739 18:06:07 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 488731 00:10:19.999 [2024-11-26 18:06:07.958604] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:10:20.257 00:10:20.257 real 0m1.922s 00:10:20.257 user 0m2.804s 00:10:20.257 sys 0m0.360s 00:10:20.257 18:06:08 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.257 18:06:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:20.257 ************************************ 00:10:20.257 END TEST event_scheduler 00:10:20.257 ************************************ 00:10:20.257 18:06:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:20.257 18:06:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:20.257 18:06:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:20.257 18:06:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.257 18:06:08 event -- common/autotest_common.sh@10 -- # set +x 00:10:20.257 ************************************ 00:10:20.257 START TEST app_repeat 00:10:20.257 ************************************ 00:10:20.257 18:06:08 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=489295 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 489295' 00:10:20.257 Process app_repeat pid: 489295 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:20.257 spdk_app_start Round 0 00:10:20.257 18:06:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 489295 /var/tmp/spdk-nbd.sock 00:10:20.257 18:06:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 489295 ']' 00:10:20.257 18:06:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:20.257 18:06:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.257 18:06:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:20.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:20.257 18:06:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.257 18:06:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:20.257 [2024-11-26 18:06:08.256830] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:20.257 [2024-11-26 18:06:08.256903] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489295 ] 00:10:20.515 [2024-11-26 18:06:08.325502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:20.515 [2024-11-26 18:06:08.387875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.515 [2024-11-26 18:06:08.387880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.515 18:06:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.515 18:06:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:20.515 18:06:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:21.080 Malloc0 00:10:21.080 18:06:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:21.338 Malloc1 00:10:21.338 18:06:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.338 18:06:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:21.596 /dev/nbd0 00:10:21.596 18:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:21.596 18:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:21.596 1+0 records in 00:10:21.596 1+0 records out 00:10:21.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024516 s, 16.7 MB/s 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.596 18:06:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:21.596 18:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:21.596 18:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.596 18:06:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:21.854 /dev/nbd1 00:10:21.854 18:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:21.854 18:06:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:21.854 1+0 records in 00:10:21.854 1+0 records out 00:10:21.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157758 s, 26.0 MB/s 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:21.854 18:06:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:21.854 18:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:21.854 18:06:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.854 18:06:09 
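Each round builds two 64 MiB malloc bdevs with a 4096-byte block size, exports them through the kernel nbd driver, and only trusts a device once it shows up in /proc/partitions and answers a direct-I/O read, as the waitfornbd trace above shows. A sketch of that attach sequence, reusing the rpc() wrapper defined earlier; the retry counts and the temporary file location are illustrative:

    # two RAM-backed bdevs: 64 MiB each, 4096-byte blocks (names are auto-generated)
    rpc bdev_malloc_create 64 4096      # -> Malloc0
    rpc bdev_malloc_create 64 4096      # -> Malloc1

    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1

    wait_for_nbd() {
        local name=$1 i tmp=/tmp/nbdtest
        # first wait for the kernel to publish the device node
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        grep -q -w "$name" /proc/partitions || return 1
        # then prove it services I/O: read one 4 KiB block with O_DIRECT
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null &&
               [ "$(stat -c %s "$tmp")" -ne 0 ]; then
                rm -f "$tmp"; return 0
            fi
            sleep 0.1
        done
        rm -f "$tmp"; return 1
    }

    wait_for_nbd nbd0
    wait_for_nbd nbd1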
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:21.854 18:06:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.854 18:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:22.112 { 00:10:22.112 "nbd_device": "/dev/nbd0", 00:10:22.112 "bdev_name": "Malloc0" 00:10:22.112 }, 00:10:22.112 { 00:10:22.112 "nbd_device": "/dev/nbd1", 00:10:22.112 "bdev_name": "Malloc1" 00:10:22.112 } 00:10:22.112 ]' 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:22.112 { 00:10:22.112 "nbd_device": "/dev/nbd0", 00:10:22.112 "bdev_name": "Malloc0" 00:10:22.112 }, 00:10:22.112 { 00:10:22.112 "nbd_device": "/dev/nbd1", 00:10:22.112 "bdev_name": "Malloc1" 00:10:22.112 } 00:10:22.112 ]' 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:22.112 /dev/nbd1' 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:22.112 /dev/nbd1' 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:22.112 18:06:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:22.372 256+0 records in 00:10:22.372 256+0 records out 00:10:22.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0039357 s, 266 MB/s 00:10:22.372 18:06:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:22.372 18:06:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:22.372 256+0 records in 00:10:22.372 256+0 records out 00:10:22.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212478 s, 49.3 MB/s 00:10:22.372 18:06:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:22.372 18:06:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:22.372 256+0 records in 00:10:22.372 256+0 records out 00:10:22.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228691 s, 45.9 MB/s 00:10:22.372 18:06:10 
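The device count is derived by listing the active exports over RPC and counting /dev/nbd entries with jq, after which the write half of nbd_dd_data_verify seeds a 1 MiB random pattern and copies it to every device with O_DIRECT. A sketch of those two steps; the pattern-file path is illustrative (the harness keeps it under its test directory):

    # count the exports the target currently knows about
    count=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 2 ] || { echo "expected 2 nbd devices, found $count" >&2; exit 1; }

    # write phase: one shared 1 MiB random pattern, replicated to each device
    pattern=/tmp/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
    done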
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:22.372 18:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.372 18:06:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:22.372 18:06:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.373 18:06:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.665 18:06:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.921 18:06:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:23.178 18:06:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:23.178 18:06:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:23.435 18:06:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:23.693 [2024-11-26 18:06:11.601909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:23.693 [2024-11-26 18:06:11.658921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.693 [2024-11-26 18:06:11.658921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.950 [2024-11-26 18:06:11.719325] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:23.950 [2024-11-26 18:06:11.719401] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:26.473 18:06:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:26.473 18:06:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:26.473 spdk_app_start Round 1 00:10:26.473 18:06:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 489295 /var/tmp/spdk-nbd.sock 00:10:26.473 18:06:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 489295 ']' 00:10:26.473 18:06:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:26.473 18:06:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:26.473 18:06:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:26.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
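The verify half re-reads each device and compares the first 1 MiB against the pattern file with cmp, detaches both exports, waits for the nbd names to drop out of /proc/partitions, confirms the export list is empty again, and ends the round by asking the app to terminate itself over RPC, which is exactly the wind-down traced above before Round 1 begins. A sketch continuing the helpers from the earlier snippets:

    # verify phase: byte-for-byte comparison against the pattern file
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$pattern" "$dev" || { echo "data mismatch on $dev" >&2; exit 1; }
    done
    rm -f "$pattern"

    # detach the exports and wait until the kernel forgets the devices
    wait_for_nbd_exit() {
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1
    }
    for dev in /dev/nbd0 /dev/nbd1; do
        rpc nbd_stop_disk "$dev"
        wait_for_nbd_exit "$(basename "$dev")"
    done

    # the export list must be empty again before the round can finish
    [ "$(rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)" -eq 0 ] ||
        { echo "nbd exports still present" >&2; exit 1; }

    # end of round: graceful shutdown, then a short pause before the next iteration
    rpc spdk_kill_instance SIGTERM
    sleep 3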
00:10:26.473 18:06:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:26.473 18:06:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:26.731 18:06:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.731 18:06:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:26.731 18:06:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:26.988 Malloc0 00:10:26.988 18:06:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:27.246 Malloc1 00:10:27.246 18:06:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:27.246 18:06:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:27.811 /dev/nbd0 00:10:27.811 18:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:27.811 18:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:27.811 18:06:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:27.811 18:06:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:27.811 18:06:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.811 18:06:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.811 18:06:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:27.811 18:06:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:27.811 18:06:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.811 18:06:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.812 18:06:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:10:27.812 1+0 records in 00:10:27.812 1+0 records out 00:10:27.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00013243 s, 30.9 MB/s 00:10:27.812 18:06:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:27.812 18:06:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:27.812 18:06:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:27.812 18:06:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.812 18:06:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:27.812 18:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.812 18:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:27.812 18:06:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:28.069 /dev/nbd1 00:10:28.069 18:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:28.069 18:06:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:28.069 1+0 records in 00:10:28.069 1+0 records out 00:10:28.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164924 s, 24.8 MB/s 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.069 18:06:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:28.070 18:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.070 18:06:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:28.070 18:06:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:28.070 18:06:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.070 18:06:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:10:28.327 { 00:10:28.327 "nbd_device": "/dev/nbd0", 00:10:28.327 "bdev_name": "Malloc0" 00:10:28.327 }, 00:10:28.327 { 00:10:28.327 "nbd_device": "/dev/nbd1", 00:10:28.327 "bdev_name": "Malloc1" 00:10:28.327 } 00:10:28.327 ]' 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:28.327 { 00:10:28.327 "nbd_device": "/dev/nbd0", 00:10:28.327 "bdev_name": "Malloc0" 00:10:28.327 }, 00:10:28.327 { 00:10:28.327 "nbd_device": "/dev/nbd1", 00:10:28.327 "bdev_name": "Malloc1" 00:10:28.327 } 00:10:28.327 ]' 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:28.327 /dev/nbd1' 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:28.327 /dev/nbd1' 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:28.327 256+0 records in 00:10:28.327 256+0 records out 00:10:28.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00385757 s, 272 MB/s 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:28.327 256+0 records in 00:10:28.327 256+0 records out 00:10:28.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204736 s, 51.2 MB/s 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:28.327 256+0 records in 00:10:28.327 256+0 records out 00:10:28.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220831 s, 47.5 MB/s 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:28.327 18:06:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:28.328 18:06:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:28.328 18:06:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:28.328 18:06:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:28.328 18:06:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.328 18:06:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.328 18:06:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:28.328 18:06:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:28.328 18:06:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.328 18:06:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.585 18:06:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:28.843 18:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.101 18:06:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:29.358 18:06:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:29.358 18:06:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:29.616 18:06:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:29.874 [2024-11-26 18:06:17.686149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:29.874 [2024-11-26 18:06:17.742083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.874 [2024-11-26 18:06:17.742083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.874 [2024-11-26 18:06:17.802728] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:29.874 [2024-11-26 18:06:17.802790] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:33.152 18:06:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:33.152 18:06:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:33.152 spdk_app_start Round 2 00:10:33.152 18:06:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 489295 /var/tmp/spdk-nbd.sock 00:10:33.152 18:06:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 489295 ']' 00:10:33.152 18:06:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:33.152 18:06:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.152 18:06:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:33.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:10:33.152 18:06:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.152 18:06:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:33.152 18:06:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.152 18:06:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:33.152 18:06:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:33.152 Malloc0 00:10:33.152 18:06:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:33.410 Malloc1 00:10:33.410 18:06:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.410 18:06:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:33.668 /dev/nbd0 00:10:33.668 18:06:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:33.668 18:06:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:10:33.668 1+0 records in 00:10:33.668 1+0 records out 00:10:33.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255862 s, 16.0 MB/s 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.668 18:06:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:33.668 18:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:33.668 18:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.668 18:06:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:34.234 /dev/nbd1 00:10:34.234 18:06:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:34.234 18:06:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:34.234 1+0 records in 00:10:34.234 1+0 records out 00:10:34.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161569 s, 25.4 MB/s 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.234 18:06:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:10:34.234 18:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.234 18:06:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:34.234 18:06:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:34.234 18:06:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.234 18:06:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:10:34.492 { 00:10:34.492 "nbd_device": "/dev/nbd0", 00:10:34.492 "bdev_name": "Malloc0" 00:10:34.492 }, 00:10:34.492 { 00:10:34.492 "nbd_device": "/dev/nbd1", 00:10:34.492 "bdev_name": "Malloc1" 00:10:34.492 } 00:10:34.492 ]' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:34.492 { 00:10:34.492 "nbd_device": "/dev/nbd0", 00:10:34.492 "bdev_name": "Malloc0" 00:10:34.492 }, 00:10:34.492 { 00:10:34.492 "nbd_device": "/dev/nbd1", 00:10:34.492 "bdev_name": "Malloc1" 00:10:34.492 } 00:10:34.492 ]' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:34.492 /dev/nbd1' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:34.492 /dev/nbd1' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:34.492 256+0 records in 00:10:34.492 256+0 records out 00:10:34.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509623 s, 206 MB/s 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:34.492 256+0 records in 00:10:34.492 256+0 records out 00:10:34.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200416 s, 52.3 MB/s 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:34.492 256+0 records in 00:10:34.492 256+0 records out 00:10:34.492 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222542 s, 47.1 MB/s 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.492 18:06:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.750 18:06:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.008 18:06:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:35.265 18:06:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:35.522 18:06:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:35.522 18:06:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:35.522 18:06:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:35.780 18:06:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:35.780 [2024-11-26 18:06:23.777596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:36.038 [2024-11-26 18:06:23.833410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.038 [2024-11-26 18:06:23.833415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.038 [2024-11-26 18:06:23.889137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:36.038 [2024-11-26 18:06:23.889199] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:38.738 18:06:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 489295 /var/tmp/spdk-nbd.sock 00:10:38.738 18:06:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 489295 ']' 00:10:38.738 18:06:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:38.738 18:06:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.738 18:06:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:38.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:10:38.738 18:06:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.738 18:06:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:38.996 18:06:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.996 18:06:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:38.996 18:06:26 event.app_repeat -- event/event.sh@39 -- # killprocess 489295 00:10:38.996 18:06:26 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 489295 ']' 00:10:38.997 18:06:26 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 489295 00:10:38.997 18:06:26 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:38.997 18:06:26 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.997 18:06:26 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 489295 00:10:38.997 18:06:26 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.997 18:06:26 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.997 18:06:26 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 489295' 00:10:38.997 killing process with pid 489295 00:10:38.997 18:06:26 event.app_repeat -- common/autotest_common.sh@973 -- # kill 489295 00:10:38.997 18:06:26 event.app_repeat -- common/autotest_common.sh@978 -- # wait 489295 00:10:39.255 spdk_app_start is called in Round 0. 00:10:39.255 Shutdown signal received, stop current app iteration 00:10:39.255 Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 reinitialization... 00:10:39.255 spdk_app_start is called in Round 1. 00:10:39.255 Shutdown signal received, stop current app iteration 00:10:39.255 Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 reinitialization... 00:10:39.255 spdk_app_start is called in Round 2. 00:10:39.255 Shutdown signal received, stop current app iteration 00:10:39.255 Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 reinitialization... 00:10:39.255 spdk_app_start is called in Round 3. 
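Each of the three harness-driven rounds repeats exactly the sequence sketched above, and the summary that follows is app_repeat's own accounting of the shutdown signals it received. The driver loop reduces to roughly this shape (illustrative; the real event.sh re-registers its kill trap and re-runs waitforlisten on every pass):

    for round in 0 1 2; do
        echo "spdk_app_start Round $round"
        # elided: wait for the RPC socket, recreate Malloc0/Malloc1, attach
        # /dev/nbd0 and /dev/nbd1, run the write/verify pass, detach
        rpc spdk_kill_instance SIGTERM
        sleep 3
    done
    # after the loop the harness kills the pid one last time (killprocess 489295
    # in the trace), which app_repeat reports as its Round 3 shutdown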
00:10:39.255 Shutdown signal received, stop current app iteration 00:10:39.255 18:06:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:39.255 18:06:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:39.255 00:10:39.255 real 0m18.835s 00:10:39.255 user 0m41.640s 00:10:39.255 sys 0m3.266s 00:10:39.255 18:06:27 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.255 18:06:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:39.255 ************************************ 00:10:39.255 END TEST app_repeat 00:10:39.255 ************************************ 00:10:39.255 18:06:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:39.255 18:06:27 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:10:39.255 18:06:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.255 18:06:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.255 18:06:27 event -- common/autotest_common.sh@10 -- # set +x 00:10:39.255 ************************************ 00:10:39.255 START TEST cpu_locks 00:10:39.255 ************************************ 00:10:39.255 18:06:27 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:10:39.255 * Looking for test storage... 00:10:39.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:10:39.255 18:06:27 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:39.255 18:06:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:10:39.255 18:06:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:39.255 18:06:27 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:39.255 18:06:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.514 18:06:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:39.514 18:06:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:39.514 18:06:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.514 18:06:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:39.514 18:06:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.514 18:06:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.514 18:06:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.514 18:06:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:39.514 18:06:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.514 18:06:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:39.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.514 --rc genhtml_branch_coverage=1 00:10:39.514 --rc genhtml_function_coverage=1 00:10:39.514 --rc genhtml_legend=1 00:10:39.514 --rc geninfo_all_blocks=1 00:10:39.514 --rc geninfo_unexecuted_blocks=1 00:10:39.514 00:10:39.514 ' 00:10:39.514 18:06:27 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:39.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.514 --rc genhtml_branch_coverage=1 00:10:39.514 --rc genhtml_function_coverage=1 00:10:39.514 --rc genhtml_legend=1 00:10:39.514 --rc geninfo_all_blocks=1 00:10:39.514 --rc geninfo_unexecuted_blocks=1 00:10:39.514 00:10:39.514 ' 00:10:39.514 18:06:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:39.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.514 --rc genhtml_branch_coverage=1 00:10:39.514 --rc genhtml_function_coverage=1 00:10:39.514 --rc genhtml_legend=1 00:10:39.514 --rc geninfo_all_blocks=1 00:10:39.514 --rc geninfo_unexecuted_blocks=1 00:10:39.514 00:10:39.514 ' 00:10:39.514 18:06:27 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:39.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.514 --rc genhtml_branch_coverage=1 00:10:39.514 --rc genhtml_function_coverage=1 00:10:39.514 --rc genhtml_legend=1 00:10:39.514 --rc geninfo_all_blocks=1 00:10:39.514 --rc geninfo_unexecuted_blocks=1 00:10:39.514 00:10:39.514 ' 00:10:39.514 18:06:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:39.514 18:06:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:39.514 18:06:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:39.514 18:06:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:39.514 18:06:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.514 18:06:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.514 18:06:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:39.514 ************************************ 
00:10:39.514 START TEST default_locks 00:10:39.514 ************************************ 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=492040 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 492040 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 492040 ']' 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.514 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:39.514 [2024-11-26 18:06:27.353185] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:39.514 [2024-11-26 18:06:27.353275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492040 ] 00:10:39.514 [2024-11-26 18:06:27.420119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.514 [2024-11-26 18:06:27.479990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.773 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.773 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:39.773 18:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 492040 00:10:39.773 18:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 492040 00:10:39.773 18:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:40.031 lslocks: write error 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 492040 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 492040 ']' 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 492040 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492040 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492040' 
00:10:40.031 killing process with pid 492040 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 492040 00:10:40.031 18:06:27 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 492040 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 492040 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 492040 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 492040 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 492040 ']' 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (492040) - No such process 00:10:40.597 ERROR: process (pid: 492040) is no longer running 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:40.597 00:10:40.597 real 0m1.133s 00:10:40.597 user 0m1.097s 00:10:40.597 sys 0m0.518s 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.597 18:06:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.597 ************************************ 00:10:40.597 END TEST default_locks 00:10:40.597 ************************************ 00:10:40.597 18:06:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:40.597 18:06:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:40.597 18:06:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.597 18:06:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.597 ************************************ 00:10:40.597 START TEST default_locks_via_rpc 00:10:40.597 ************************************ 00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=492209 00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 492209 00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 492209 ']' 00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
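
The default_locks run that completed above verifies that the spdk_tgt bound to core mask 0x1 is actually holding its per-core lock file: the trace calls lslocks -p <pid> and greps the output for spdk_cpu_lock. A minimal standalone sketch of that check follows (the helper name and the fixed sleep are illustrative assumptions; the real suite waits on the RPC socket instead of sleeping):

    #!/usr/bin/env bash
    # Illustrative sketch only; mirrors the lslocks/grep pattern visible in the trace above.
    core_lock_held() {
        local pid=$1
        # spdk_tgt takes an advisory lock on /var/tmp/spdk_cpu_lock_* for each core it claims;
        # lslocks lists that process's locks, grep checks for the SPDK lock file name.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    ./build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    sleep 2                      # assumption: the suite uses waitforlisten, not a sleep
    core_lock_held "$tgt_pid" && echo "core lock present for pid $tgt_pid"
    kill "$tgt_pid"; wait "$tgt_pid" || true
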
00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.597 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.597 [2024-11-26 18:06:28.537020] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:40.597 [2024-11-26 18:06:28.537104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492209 ] 00:10:40.597 [2024-11-26 18:06:28.603529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.855 [2024-11-26 18:06:28.662754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.113 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.113 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:41.113 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:41.113 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 492209 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 492209 00:10:41.114 18:06:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 492209 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 492209 ']' 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 492209 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492209 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.380 18:06:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492209' 00:10:41.380 killing process with pid 492209 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 492209 00:10:41.380 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 492209 00:10:41.645 00:10:41.645 real 0m1.150s 00:10:41.645 user 0m1.108s 00:10:41.645 sys 0m0.496s 00:10:41.645 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.645 18:06:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.645 ************************************ 00:10:41.645 END TEST default_locks_via_rpc 00:10:41.645 ************************************ 00:10:41.645 18:06:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:41.645 18:06:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:41.645 18:06:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.645 18:06:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:41.904 ************************************ 00:10:41.904 START TEST non_locking_app_on_locked_coremask 00:10:41.904 ************************************ 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=492370 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 492370 /var/tmp/spdk.sock 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 492370 ']' 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:41.904 18:06:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:41.904 [2024-11-26 18:06:29.735993] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
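
The default_locks_via_rpc variant traced above toggles the same locks at runtime rather than at launch: it issues framework_disable_cpumask_locks and framework_enable_cpumask_locks through the rpc_cmd wrapper. A hedged sketch of that sequence, assuming rpc_cmd maps to scripts/rpc.py against the default /var/tmp/spdk.sock:

    ./build/bin/spdk_tgt -m 0x1 &
    sleep 2                                              # assumption: stand-in for waitforlisten
    ./scripts/rpc.py framework_disable_cpumask_locks     # drop the per-core lock files while the target keeps running
    ./scripts/rpc.py framework_enable_cpumask_locks      # re-acquire them; lslocks shows spdk_cpu_lock again
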
00:10:41.904 [2024-11-26 18:06:29.736087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492370 ] 00:10:41.904 [2024-11-26 18:06:29.802853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.904 [2024-11-26 18:06:29.862577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.162 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:42.162 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:42.162 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=492384 00:10:42.162 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 492384 /var/tmp/spdk2.sock 00:10:42.162 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:42.162 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 492384 ']' 00:10:42.163 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:42.163 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.163 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:42.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:42.163 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.163 18:06:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:42.421 [2024-11-26 18:06:30.209881] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:42.421 [2024-11-26 18:06:30.209969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492384 ] 00:10:42.421 [2024-11-26 18:06:30.312350] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
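
In non_locking_app_on_locked_coremask the first target keeps its core 0 lock while the second is started with --disable-cpumask-locks, which is why the "CPU core locks deactivated" notice above appears and both instances can share mask 0x1. A sketch of that launch pattern, with socket names following the suite's spdk.sock/spdk2.sock convention and timings illustrative:

    ./build/bin/spdk_tgt -m 0x1 &                                                  # holds /var/tmp/spdk_cpu_lock_000
    sleep 2
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips lock acquisition, so no conflict
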
00:10:42.421 [2024-11-26 18:06:30.312376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.421 [2024-11-26 18:06:30.426918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.354 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.354 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:43.354 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 492370 00:10:43.354 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 492370 00:10:43.354 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:43.613 lslocks: write error 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 492370 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 492370 ']' 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 492370 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492370 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492370' 00:10:43.613 killing process with pid 492370 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 492370 00:10:43.613 18:06:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 492370 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 492384 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 492384 ']' 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 492384 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492384 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492384' 00:10:44.546 killing 
process with pid 492384 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 492384 00:10:44.546 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 492384 00:10:45.112 00:10:45.112 real 0m3.199s 00:10:45.112 user 0m3.396s 00:10:45.112 sys 0m1.023s 00:10:45.112 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.112 18:06:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:45.112 ************************************ 00:10:45.112 END TEST non_locking_app_on_locked_coremask 00:10:45.112 ************************************ 00:10:45.112 18:06:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:45.112 18:06:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:45.112 18:06:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.112 18:06:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:45.112 ************************************ 00:10:45.112 START TEST locking_app_on_unlocked_coremask 00:10:45.112 ************************************ 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=492797 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 492797 /var/tmp/spdk.sock 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 492797 ']' 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.112 18:06:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:45.112 [2024-11-26 18:06:32.980537] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:45.112 [2024-11-26 18:06:32.980647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492797 ] 00:10:45.112 [2024-11-26 18:06:33.044865] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:45.112 [2024-11-26 18:06:33.044910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.112 [2024-11-26 18:06:33.099279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=492819 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 492819 /var/tmp/spdk2.sock 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 492819 ']' 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:45.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.369 18:06:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:45.627 [2024-11-26 18:06:33.425268] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:45.628 [2024-11-26 18:06:33.425382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid492819 ] 00:10:45.628 [2024-11-26 18:06:33.524410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.628 [2024-11-26 18:06:33.636511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.560 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.560 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:46.560 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 492819 00:10:46.560 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 492819 00:10:46.560 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:47.126 lslocks: write error 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 492797 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 492797 ']' 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 492797 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492797 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492797' 00:10:47.126 killing process with pid 492797 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 492797 00:10:47.126 18:06:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 492797 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 492819 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 492819 ']' 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 492819 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 492819 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.058 18:06:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 492819' 00:10:48.058 killing process with pid 492819 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 492819 00:10:48.058 18:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 492819 00:10:48.315 00:10:48.315 real 0m3.298s 00:10:48.315 user 0m3.521s 00:10:48.315 sys 0m1.041s 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.315 ************************************ 00:10:48.315 END TEST locking_app_on_unlocked_coremask 00:10:48.315 ************************************ 00:10:48.315 18:06:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:48.315 18:06:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:48.315 18:06:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.315 18:06:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:48.315 ************************************ 00:10:48.315 START TEST locking_app_on_locked_coremask 00:10:48.315 ************************************ 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=493199 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 493199 /var/tmp/spdk.sock 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 493199 ']' 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.315 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.574 [2024-11-26 18:06:36.329714] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:48.574 [2024-11-26 18:06:36.329795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493199 ] 00:10:48.574 [2024-11-26 18:06:36.391425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.574 [2024-11-26 18:06:36.445759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=493253 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 493253 /var/tmp/spdk2.sock 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 493253 /var/tmp/spdk2.sock 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 493253 /var/tmp/spdk2.sock 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 493253 ']' 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:48.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.832 18:06:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:48.832 [2024-11-26 18:06:36.759513] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:48.832 [2024-11-26 18:06:36.759603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493253 ] 00:10:49.091 [2024-11-26 18:06:36.857724] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 493199 has claimed it. 00:10:49.091 [2024-11-26 18:06:36.857779] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:49.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (493253) - No such process 00:10:49.656 ERROR: process (pid: 493253) is no longer running 00:10:49.656 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.656 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:49.656 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:49.656 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:49.656 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:49.656 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:49.656 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 493199 00:10:49.656 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 493199 00:10:49.656 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:49.914 lslocks: write error 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 493199 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 493199 ']' 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 493199 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493199 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493199' 00:10:49.914 killing process with pid 493199 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 493199 00:10:49.914 18:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 493199 00:10:50.480 00:10:50.480 real 0m1.911s 00:10:50.480 user 0m2.096s 00:10:50.480 sys 0m0.621s 00:10:50.480 18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.480 
18:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.480 ************************************ 00:10:50.480 END TEST locking_app_on_locked_coremask 00:10:50.480 ************************************ 00:10:50.480 18:06:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:50.480 18:06:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:50.480 18:06:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.480 18:06:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:50.480 ************************************ 00:10:50.480 START TEST locking_overlapped_coremask 00:10:50.480 ************************************ 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=493423 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 493423 /var/tmp/spdk.sock 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 493423 ']' 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.480 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.480 [2024-11-26 18:06:38.292457] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:50.480 [2024-11-26 18:06:38.292563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493423 ] 00:10:50.480 [2024-11-26 18:06:38.355935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:50.480 [2024-11-26 18:06:38.412427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.480 [2024-11-26 18:06:38.412449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.480 [2024-11-26 18:06:38.412452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=493548 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 493548 /var/tmp/spdk2.sock 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 493548 /var/tmp/spdk2.sock 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 493548 /var/tmp/spdk2.sock 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 493548 ']' 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:50.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:50.738 18:06:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.994 [2024-11-26 18:06:38.751548] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:50.994 [2024-11-26 18:06:38.751651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493548 ] 00:10:50.994 [2024-11-26 18:06:38.856175] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 493423 has claimed it. 00:10:50.994 [2024-11-26 18:06:38.856230] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:51.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (493548) - No such process 00:10:51.557 ERROR: process (pid: 493548) is no longer running 00:10:51.557 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.557 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 493423 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 493423 ']' 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 493423 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493423 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493423' 00:10:51.558 killing process with pid 493423 00:10:51.558 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 493423 00:10:51.558 18:06:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 493423 00:10:52.120 00:10:52.120 real 0m1.691s 00:10:52.120 user 0m4.730s 00:10:52.120 sys 0m0.459s 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:52.120 ************************************ 00:10:52.120 END TEST locking_overlapped_coremask 00:10:52.120 ************************************ 00:10:52.120 18:06:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:52.120 18:06:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:52.120 18:06:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.120 18:06:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:52.120 ************************************ 00:10:52.120 START TEST locking_overlapped_coremask_via_rpc 00:10:52.120 ************************************ 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=493720 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 493720 /var/tmp/spdk.sock 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 493720 ']' 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.120 18:06:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.120 [2024-11-26 18:06:40.034358] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:52.120 [2024-11-26 18:06:40.034460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493720 ] 00:10:52.120 [2024-11-26 18:06:40.104455] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:52.120 [2024-11-26 18:06:40.104497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.377 [2024-11-26 18:06:40.172051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.377 [2024-11-26 18:06:40.172097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.377 [2024-11-26 18:06:40.172100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=493728 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 493728 /var/tmp/spdk2.sock 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 493728 ']' 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:52.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.634 18:06:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.634 [2024-11-26 18:06:40.532466] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:52.634 [2024-11-26 18:06:40.532560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493728 ] 00:10:52.634 [2024-11-26 18:06:40.638397] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:52.634 [2024-11-26 18:06:40.638432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.891 [2024-11-26 18:06:40.766823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.891 [2024-11-26 18:06:40.766882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:52.891 [2024-11-26 18:06:40.766885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.823 [2024-11-26 18:06:41.528412] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 493720 has claimed it. 
00:10:53.823 request: 00:10:53.823 { 00:10:53.823 "method": "framework_enable_cpumask_locks", 00:10:53.823 "req_id": 1 00:10:53.823 } 00:10:53.823 Got JSON-RPC error response 00:10:53.823 response: 00:10:53.823 { 00:10:53.823 "code": -32603, 00:10:53.823 "message": "Failed to claim CPU core: 2" 00:10:53.823 } 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 493720 /var/tmp/spdk.sock 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 493720 ']' 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:53.823 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 493728 /var/tmp/spdk2.sock 00:10:53.824 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 493728 ']' 00:10:53.824 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:53.824 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.824 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:53.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:53.824 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.824 18:06:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.080 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.080 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:54.080 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:54.080 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:54.080 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:54.081 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:54.081 00:10:54.081 real 0m2.105s 00:10:54.081 user 0m1.158s 00:10:54.081 sys 0m0.189s 00:10:54.081 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.081 18:06:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.081 ************************************ 00:10:54.081 END TEST locking_overlapped_coremask_via_rpc 00:10:54.081 ************************************ 00:10:54.337 18:06:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:54.337 18:06:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 493720 ]] 00:10:54.337 18:06:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 493720 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 493720 ']' 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 493720 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493720 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493720' 00:10:54.337 killing process with pid 493720 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 493720 00:10:54.337 18:06:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 493720 00:10:54.594 18:06:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 493728 ]] 00:10:54.594 18:06:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 493728 00:10:54.594 18:06:42 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 493728 ']' 00:10:54.594 18:06:42 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 493728 00:10:54.594 18:06:42 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:54.594 18:06:42 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:10:54.594 18:06:42 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493728 00:10:54.852 18:06:42 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:54.852 18:06:42 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:54.852 18:06:42 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493728' 00:10:54.852 killing process with pid 493728 00:10:54.852 18:06:42 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 493728 00:10:54.852 18:06:42 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 493728 00:10:55.110 18:06:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:55.110 18:06:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:55.110 18:06:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 493720 ]] 00:10:55.110 18:06:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 493720 00:10:55.110 18:06:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 493720 ']' 00:10:55.110 18:06:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 493720 00:10:55.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (493720) - No such process 00:10:55.111 18:06:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 493720 is not found' 00:10:55.111 Process with pid 493720 is not found 00:10:55.111 18:06:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 493728 ]] 00:10:55.111 18:06:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 493728 00:10:55.111 18:06:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 493728 ']' 00:10:55.111 18:06:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 493728 00:10:55.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (493728) - No such process 00:10:55.111 18:06:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 493728 is not found' 00:10:55.111 Process with pid 493728 is not found 00:10:55.111 18:06:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:55.111 00:10:55.111 real 0m15.948s 00:10:55.111 user 0m29.013s 00:10:55.111 sys 0m5.328s 00:10:55.111 18:06:43 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.111 18:06:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:55.111 ************************************ 00:10:55.111 END TEST cpu_locks 00:10:55.111 ************************************ 00:10:55.111 00:10:55.111 real 0m40.789s 00:10:55.111 user 1m20.097s 00:10:55.111 sys 0m9.410s 00:10:55.111 18:06:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.111 18:06:43 event -- common/autotest_common.sh@10 -- # set +x 00:10:55.111 ************************************ 00:10:55.111 END TEST event 00:10:55.111 ************************************ 00:10:55.111 18:06:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:55.111 18:06:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:55.111 18:06:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.111 18:06:43 -- common/autotest_common.sh@10 -- # set +x 00:10:55.369 ************************************ 00:10:55.369 START TEST thread 00:10:55.369 ************************************ 00:10:55.369 18:06:43 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:55.369 * Looking for test storage... 00:10:55.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:10:55.369 18:06:43 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:55.369 18:06:43 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:55.369 18:06:43 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:55.369 18:06:43 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:55.369 18:06:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.369 18:06:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.369 18:06:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.369 18:06:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.369 18:06:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.369 18:06:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.369 18:06:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.369 18:06:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.369 18:06:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.369 18:06:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.369 18:06:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.369 18:06:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:55.369 18:06:43 thread -- scripts/common.sh@345 -- # : 1 00:10:55.369 18:06:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.369 18:06:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.369 18:06:43 thread -- scripts/common.sh@365 -- # decimal 1 00:10:55.369 18:06:43 thread -- scripts/common.sh@353 -- # local d=1 00:10:55.369 18:06:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.369 18:06:43 thread -- scripts/common.sh@355 -- # echo 1 00:10:55.369 18:06:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.369 18:06:43 thread -- scripts/common.sh@366 -- # decimal 2 00:10:55.369 18:06:43 thread -- scripts/common.sh@353 -- # local d=2 00:10:55.369 18:06:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.369 18:06:43 thread -- scripts/common.sh@355 -- # echo 2 00:10:55.369 18:06:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.369 18:06:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.369 18:06:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.369 18:06:43 thread -- scripts/common.sh@368 -- # return 0 00:10:55.369 18:06:43 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.369 18:06:43 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:55.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.369 --rc genhtml_branch_coverage=1 00:10:55.369 --rc genhtml_function_coverage=1 00:10:55.369 --rc genhtml_legend=1 00:10:55.369 --rc geninfo_all_blocks=1 00:10:55.369 --rc geninfo_unexecuted_blocks=1 00:10:55.369 00:10:55.369 ' 00:10:55.369 18:06:43 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:55.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.369 --rc genhtml_branch_coverage=1 00:10:55.369 --rc genhtml_function_coverage=1 00:10:55.369 --rc genhtml_legend=1 00:10:55.369 --rc geninfo_all_blocks=1 00:10:55.370 --rc geninfo_unexecuted_blocks=1 00:10:55.370 00:10:55.370 ' 00:10:55.370 18:06:43 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:55.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.370 --rc genhtml_branch_coverage=1 00:10:55.370 --rc genhtml_function_coverage=1 00:10:55.370 --rc genhtml_legend=1 00:10:55.370 --rc geninfo_all_blocks=1 00:10:55.370 --rc geninfo_unexecuted_blocks=1 00:10:55.370 00:10:55.370 ' 00:10:55.370 18:06:43 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:55.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.370 --rc genhtml_branch_coverage=1 00:10:55.370 --rc genhtml_function_coverage=1 00:10:55.370 --rc genhtml_legend=1 00:10:55.370 --rc geninfo_all_blocks=1 00:10:55.370 --rc geninfo_unexecuted_blocks=1 00:10:55.370 00:10:55.370 ' 00:10:55.370 18:06:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:55.370 18:06:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:55.370 18:06:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.370 18:06:43 thread -- common/autotest_common.sh@10 -- # set +x 00:10:55.370 ************************************ 00:10:55.370 START TEST thread_poller_perf 00:10:55.370 ************************************ 00:10:55.370 18:06:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:55.370 [2024-11-26 18:06:43.326276] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:55.370 [2024-11-26 18:06:43.326392] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494225 ] 00:10:55.628 [2024-11-26 18:06:43.394775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.628 [2024-11-26 18:06:43.452725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.628 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:10:56.560 [2024-11-26T17:06:44.571Z] ====================================== 00:10:56.560 [2024-11-26T17:06:44.571Z] busy:2710434210 (cyc) 00:10:56.560 [2024-11-26T17:06:44.571Z] total_run_count: 367000 00:10:56.560 [2024-11-26T17:06:44.571Z] tsc_hz: 2700000000 (cyc) 00:10:56.560 [2024-11-26T17:06:44.571Z] ====================================== 00:10:56.560 [2024-11-26T17:06:44.571Z] poller_cost: 7385 (cyc), 2735 (nsec) 00:10:56.560 00:10:56.560 real 0m1.210s 00:10:56.560 user 0m1.138s 00:10:56.560 sys 0m0.067s 00:10:56.560 18:06:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.560 18:06:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:56.560 ************************************ 00:10:56.560 END TEST thread_poller_perf 00:10:56.560 ************************************ 00:10:56.560 18:06:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:56.560 18:06:44 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:56.560 18:06:44 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.560 18:06:44 thread -- common/autotest_common.sh@10 -- # set +x 00:10:56.818 ************************************ 00:10:56.818 START TEST thread_poller_perf 00:10:56.818 ************************************ 00:10:56.818 18:06:44 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:56.818 [2024-11-26 18:06:44.590329] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:10:56.818 [2024-11-26 18:06:44.590399] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494379 ] 00:10:56.818 [2024-11-26 18:06:44.655795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.819 [2024-11-26 18:06:44.713770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.819 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:58.195 [2024-11-26T17:06:46.206Z] ====================================== 00:10:58.195 [2024-11-26T17:06:46.206Z] busy:2702045646 (cyc) 00:10:58.195 [2024-11-26T17:06:46.206Z] total_run_count: 4869000 00:10:58.195 [2024-11-26T17:06:46.206Z] tsc_hz: 2700000000 (cyc) 00:10:58.195 [2024-11-26T17:06:46.206Z] ====================================== 00:10:58.195 [2024-11-26T17:06:46.206Z] poller_cost: 554 (cyc), 205 (nsec) 00:10:58.195 00:10:58.195 real 0m1.202s 00:10:58.195 user 0m1.130s 00:10:58.195 sys 0m0.067s 00:10:58.195 18:06:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.195 18:06:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:58.195 ************************************ 00:10:58.195 END TEST thread_poller_perf 00:10:58.195 ************************************ 00:10:58.195 18:06:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:58.195 00:10:58.195 real 0m2.656s 00:10:58.195 user 0m2.400s 00:10:58.196 sys 0m0.261s 00:10:58.196 18:06:45 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.196 18:06:45 thread -- common/autotest_common.sh@10 -- # set +x 00:10:58.196 ************************************ 00:10:58.196 END TEST thread 00:10:58.196 ************************************ 00:10:58.196 18:06:45 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:58.196 18:06:45 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:58.196 18:06:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:58.196 18:06:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.196 18:06:45 -- common/autotest_common.sh@10 -- # set +x 00:10:58.196 ************************************ 00:10:58.196 START TEST app_cmdline 00:10:58.196 ************************************ 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:58.196 * Looking for test storage... 
00:10:58.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.196 18:06:45 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:58.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.196 --rc genhtml_branch_coverage=1 00:10:58.196 --rc genhtml_function_coverage=1 00:10:58.196 --rc genhtml_legend=1 00:10:58.196 --rc geninfo_all_blocks=1 00:10:58.196 --rc geninfo_unexecuted_blocks=1 00:10:58.196 00:10:58.196 ' 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:58.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.196 --rc genhtml_branch_coverage=1 00:10:58.196 --rc genhtml_function_coverage=1 00:10:58.196 --rc genhtml_legend=1 00:10:58.196 --rc geninfo_all_blocks=1 00:10:58.196 --rc geninfo_unexecuted_blocks=1 
00:10:58.196 00:10:58.196 ' 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:58.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.196 --rc genhtml_branch_coverage=1 00:10:58.196 --rc genhtml_function_coverage=1 00:10:58.196 --rc genhtml_legend=1 00:10:58.196 --rc geninfo_all_blocks=1 00:10:58.196 --rc geninfo_unexecuted_blocks=1 00:10:58.196 00:10:58.196 ' 00:10:58.196 18:06:45 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:58.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.196 --rc genhtml_branch_coverage=1 00:10:58.196 --rc genhtml_function_coverage=1 00:10:58.196 --rc genhtml_legend=1 00:10:58.196 --rc geninfo_all_blocks=1 00:10:58.196 --rc geninfo_unexecuted_blocks=1 00:10:58.196 00:10:58.196 ' 00:10:58.196 18:06:45 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:58.196 18:06:45 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=494583 00:10:58.196 18:06:45 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:58.196 18:06:45 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 494583 00:10:58.196 18:06:46 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 494583 ']' 00:10:58.196 18:06:46 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.196 18:06:46 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.196 18:06:46 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.196 18:06:46 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.196 18:06:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:58.196 [2024-11-26 18:06:46.052678] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:10:58.196 [2024-11-26 18:06:46.052750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid494583 ] 00:10:58.196 [2024-11-26 18:06:46.118870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.196 [2024-11-26 18:06:46.177166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.454 18:06:46 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.454 18:06:46 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:58.454 18:06:46 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:58.712 { 00:10:58.712 "version": "SPDK v25.01-pre git sha1 3c5c3d590", 00:10:58.712 "fields": { 00:10:58.712 "major": 25, 00:10:58.712 "minor": 1, 00:10:58.712 "patch": 0, 00:10:58.712 "suffix": "-pre", 00:10:58.712 "commit": "3c5c3d590" 00:10:58.712 } 00:10:58.712 } 00:10:58.712 18:06:46 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:58.712 18:06:46 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:58.712 18:06:46 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:58.712 18:06:46 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:58.712 18:06:46 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:58.712 18:06:46 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.712 18:06:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:58.712 18:06:46 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:58.712 18:06:46 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:58.712 18:06:46 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.025 18:06:46 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:59.025 18:06:46 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:59.025 18:06:46 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:59.025 18:06:46 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:59.025 request: 00:10:59.025 { 00:10:59.025 "method": "env_dpdk_get_mem_stats", 00:10:59.025 "req_id": 1 00:10:59.025 } 00:10:59.025 Got JSON-RPC error response 00:10:59.025 response: 00:10:59.025 { 00:10:59.025 "code": -32601, 00:10:59.025 "message": "Method not found" 00:10:59.025 } 00:10:59.025 18:06:47 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:59.025 18:06:47 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.025 18:06:47 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:59.025 18:06:47 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:59.025 18:06:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 494583 00:10:59.025 18:06:47 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 494583 ']' 00:10:59.025 18:06:47 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 494583 00:10:59.025 18:06:47 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:59.025 18:06:47 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.025 18:06:47 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 494583 00:10:59.283 18:06:47 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.283 18:06:47 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.283 18:06:47 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 494583' 00:10:59.283 killing process with pid 494583 00:10:59.283 18:06:47 app_cmdline -- common/autotest_common.sh@973 -- # kill 494583 00:10:59.283 18:06:47 app_cmdline -- common/autotest_common.sh@978 -- # wait 494583 00:10:59.542 00:10:59.542 real 0m1.636s 00:10:59.542 user 0m2.029s 00:10:59.542 sys 0m0.486s 00:10:59.542 18:06:47 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.542 18:06:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:59.542 ************************************ 00:10:59.542 END TEST app_cmdline 00:10:59.542 ************************************ 00:10:59.542 18:06:47 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:59.542 18:06:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:59.542 18:06:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.542 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:10:59.542 ************************************ 00:10:59.542 START TEST version 00:10:59.542 ************************************ 00:10:59.542 18:06:47 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:59.800 * Looking for test storage... 
00:10:59.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:59.800 18:06:47 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:59.800 18:06:47 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:59.800 18:06:47 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:59.800 18:06:47 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:59.800 18:06:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.800 18:06:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.800 18:06:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.800 18:06:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.800 18:06:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.800 18:06:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.800 18:06:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.800 18:06:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.800 18:06:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.800 18:06:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.800 18:06:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.800 18:06:47 version -- scripts/common.sh@344 -- # case "$op" in 00:10:59.800 18:06:47 version -- scripts/common.sh@345 -- # : 1 00:10:59.800 18:06:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.800 18:06:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:59.800 18:06:47 version -- scripts/common.sh@365 -- # decimal 1 00:10:59.800 18:06:47 version -- scripts/common.sh@353 -- # local d=1 00:10:59.800 18:06:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.800 18:06:47 version -- scripts/common.sh@355 -- # echo 1 00:10:59.801 18:06:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.801 18:06:47 version -- scripts/common.sh@366 -- # decimal 2 00:10:59.801 18:06:47 version -- scripts/common.sh@353 -- # local d=2 00:10:59.801 18:06:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.801 18:06:47 version -- scripts/common.sh@355 -- # echo 2 00:10:59.801 18:06:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.801 18:06:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.801 18:06:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.801 18:06:47 version -- scripts/common.sh@368 -- # return 0 00:10:59.801 18:06:47 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.801 18:06:47 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:59.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.801 --rc genhtml_branch_coverage=1 00:10:59.801 --rc genhtml_function_coverage=1 00:10:59.801 --rc genhtml_legend=1 00:10:59.801 --rc geninfo_all_blocks=1 00:10:59.801 --rc geninfo_unexecuted_blocks=1 00:10:59.801 00:10:59.801 ' 00:10:59.801 18:06:47 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:59.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.801 --rc genhtml_branch_coverage=1 00:10:59.801 --rc genhtml_function_coverage=1 00:10:59.801 --rc genhtml_legend=1 00:10:59.801 --rc geninfo_all_blocks=1 00:10:59.801 --rc geninfo_unexecuted_blocks=1 00:10:59.801 00:10:59.801 ' 00:10:59.801 18:06:47 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:59.801 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.801 --rc genhtml_branch_coverage=1 00:10:59.801 --rc genhtml_function_coverage=1 00:10:59.801 --rc genhtml_legend=1 00:10:59.801 --rc geninfo_all_blocks=1 00:10:59.801 --rc geninfo_unexecuted_blocks=1 00:10:59.801 00:10:59.801 ' 00:10:59.801 18:06:47 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:59.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.801 --rc genhtml_branch_coverage=1 00:10:59.801 --rc genhtml_function_coverage=1 00:10:59.801 --rc genhtml_legend=1 00:10:59.801 --rc geninfo_all_blocks=1 00:10:59.801 --rc geninfo_unexecuted_blocks=1 00:10:59.801 00:10:59.801 ' 00:10:59.801 18:06:47 version -- app/version.sh@17 -- # get_header_version major 00:10:59.801 18:06:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:59.801 18:06:47 version -- app/version.sh@14 -- # cut -f2 00:10:59.801 18:06:47 version -- app/version.sh@14 -- # tr -d '"' 00:10:59.801 18:06:47 version -- app/version.sh@17 -- # major=25 00:10:59.801 18:06:47 version -- app/version.sh@18 -- # get_header_version minor 00:10:59.801 18:06:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:59.801 18:06:47 version -- app/version.sh@14 -- # cut -f2 00:10:59.801 18:06:47 version -- app/version.sh@14 -- # tr -d '"' 00:10:59.801 18:06:47 version -- app/version.sh@18 -- # minor=1 00:10:59.801 18:06:47 version -- app/version.sh@19 -- # get_header_version patch 00:10:59.801 18:06:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:59.801 18:06:47 version -- app/version.sh@14 -- # cut -f2 00:10:59.801 18:06:47 version -- app/version.sh@14 -- # tr -d '"' 00:10:59.801 18:06:47 version -- app/version.sh@19 -- # patch=0 00:10:59.801 18:06:47 version -- app/version.sh@20 -- # get_header_version suffix 00:10:59.801 18:06:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:59.801 18:06:47 version -- app/version.sh@14 -- # cut -f2 00:10:59.801 18:06:47 version -- app/version.sh@14 -- # tr -d '"' 00:10:59.801 18:06:47 version -- app/version.sh@20 -- # suffix=-pre 00:10:59.801 18:06:47 version -- app/version.sh@22 -- # version=25.1 00:10:59.801 18:06:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:59.801 18:06:47 version -- app/version.sh@28 -- # version=25.1rc0 00:10:59.801 18:06:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:59.801 18:06:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:59.801 18:06:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:59.801 18:06:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:59.801 00:10:59.801 real 0m0.200s 00:10:59.801 user 0m0.123s 00:10:59.801 sys 0m0.102s 00:10:59.801 18:06:47 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.801 
18:06:47 version -- common/autotest_common.sh@10 -- # set +x 00:10:59.801 ************************************ 00:10:59.801 END TEST version 00:10:59.801 ************************************ 00:10:59.801 18:06:47 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:59.801 18:06:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:59.801 18:06:47 -- spdk/autotest.sh@194 -- # uname -s 00:10:59.801 18:06:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:59.801 18:06:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:59.801 18:06:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:59.801 18:06:47 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:59.801 18:06:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:59.801 18:06:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:59.801 18:06:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.801 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:10:59.801 18:06:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:59.801 18:06:47 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:10:59.801 18:06:47 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:10:59.801 18:06:47 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:10:59.801 18:06:47 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:10:59.801 18:06:47 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:10:59.801 18:06:47 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:59.801 18:06:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.801 18:06:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.801 18:06:47 -- common/autotest_common.sh@10 -- # set +x 00:11:00.059 ************************************ 00:11:00.059 START TEST nvmf_tcp 00:11:00.059 ************************************ 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:00.059 * Looking for test storage... 
00:11:00.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.059 18:06:47 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.059 --rc genhtml_branch_coverage=1 00:11:00.059 --rc genhtml_function_coverage=1 00:11:00.059 --rc genhtml_legend=1 00:11:00.059 --rc geninfo_all_blocks=1 00:11:00.059 --rc geninfo_unexecuted_blocks=1 00:11:00.059 00:11:00.059 ' 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.059 --rc genhtml_branch_coverage=1 00:11:00.059 --rc genhtml_function_coverage=1 00:11:00.059 --rc genhtml_legend=1 00:11:00.059 --rc geninfo_all_blocks=1 00:11:00.059 --rc geninfo_unexecuted_blocks=1 00:11:00.059 00:11:00.059 ' 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:11:00.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.059 --rc genhtml_branch_coverage=1 00:11:00.059 --rc genhtml_function_coverage=1 00:11:00.059 --rc genhtml_legend=1 00:11:00.059 --rc geninfo_all_blocks=1 00:11:00.059 --rc geninfo_unexecuted_blocks=1 00:11:00.059 00:11:00.059 ' 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.059 --rc genhtml_branch_coverage=1 00:11:00.059 --rc genhtml_function_coverage=1 00:11:00.059 --rc genhtml_legend=1 00:11:00.059 --rc geninfo_all_blocks=1 00:11:00.059 --rc geninfo_unexecuted_blocks=1 00:11:00.059 00:11:00.059 ' 00:11:00.059 18:06:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:00.059 18:06:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:11:00.059 18:06:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.059 18:06:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:00.059 ************************************ 00:11:00.059 START TEST nvmf_target_core 00:11:00.059 ************************************ 00:11:00.059 18:06:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:11:00.060 * Looking for test storage... 00:11:00.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:00.060 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.060 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.060 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.319 --rc genhtml_branch_coverage=1 00:11:00.319 --rc genhtml_function_coverage=1 00:11:00.319 --rc genhtml_legend=1 00:11:00.319 --rc geninfo_all_blocks=1 00:11:00.319 --rc geninfo_unexecuted_blocks=1 00:11:00.319 00:11:00.319 ' 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.319 --rc genhtml_branch_coverage=1 00:11:00.319 --rc genhtml_function_coverage=1 00:11:00.319 --rc genhtml_legend=1 00:11:00.319 --rc geninfo_all_blocks=1 00:11:00.319 --rc geninfo_unexecuted_blocks=1 00:11:00.319 00:11:00.319 ' 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.319 --rc genhtml_branch_coverage=1 00:11:00.319 --rc genhtml_function_coverage=1 00:11:00.319 --rc genhtml_legend=1 00:11:00.319 --rc geninfo_all_blocks=1 00:11:00.319 --rc geninfo_unexecuted_blocks=1 00:11:00.319 00:11:00.319 ' 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.319 --rc genhtml_branch_coverage=1 00:11:00.319 --rc genhtml_function_coverage=1 00:11:00.319 --rc genhtml_legend=1 00:11:00.319 --rc geninfo_all_blocks=1 00:11:00.319 --rc geninfo_unexecuted_blocks=1 00:11:00.319 00:11:00.319 ' 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.319 18:06:48 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.320 
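The run_test wrapper invoked just above is what produces the START TEST / END TEST banners and the closing real/user/sys timing seen throughout this log: it takes a test name plus the command to run, prints the opening banner, times the command, and prints the closing banner. A minimal sketch of that shape, inferred from the banners and timing visible in this trace rather than copied from the actual helper in autotest_common.sh:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # e.g. .../spdk/test/nvmf/target/abort.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }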
************************************ 00:11:00.320 START TEST nvmf_abort 00:11:00.320 ************************************ 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:00.320 * Looking for test storage... 00:11:00.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:11:00.320 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.580 --rc genhtml_branch_coverage=1 00:11:00.580 --rc genhtml_function_coverage=1 00:11:00.580 --rc genhtml_legend=1 00:11:00.580 --rc geninfo_all_blocks=1 00:11:00.580 --rc geninfo_unexecuted_blocks=1 00:11:00.580 00:11:00.580 ' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.580 --rc genhtml_branch_coverage=1 00:11:00.580 --rc genhtml_function_coverage=1 00:11:00.580 --rc genhtml_legend=1 00:11:00.580 --rc geninfo_all_blocks=1 00:11:00.580 --rc geninfo_unexecuted_blocks=1 00:11:00.580 00:11:00.580 ' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.580 --rc genhtml_branch_coverage=1 00:11:00.580 --rc genhtml_function_coverage=1 00:11:00.580 --rc genhtml_legend=1 00:11:00.580 --rc geninfo_all_blocks=1 00:11:00.580 --rc geninfo_unexecuted_blocks=1 00:11:00.580 00:11:00.580 ' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:00.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:00.580 --rc genhtml_branch_coverage=1 00:11:00.580 --rc genhtml_function_coverage=1 00:11:00.580 --rc genhtml_legend=1 00:11:00.580 --rc geninfo_all_blocks=1 00:11:00.580 --rc geninfo_unexecuted_blocks=1 00:11:00.580 00:11:00.580 ' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
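nvmftestinit, traced below, discovers the Intel E810 ports (0000:09:00.0 and 0000:09:00.1, exposed as cvl_0_0 and cvl_0_1), moves the target-side port into a private network namespace, assigns the 10.0.0.x test addresses, opens TCP port 4420 in iptables, and ping-checks both directions. Condensed into the equivalent manual steps, with the interface names and addresses taken from this trace (a sketch of what the trace does, not a general recipe):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator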
00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:11:00.580 18:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.481 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.481 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.740 18:06:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:02.740 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:02.740 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.740 18:06:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:02.740 Found net devices under 0000:09:00.0: cvl_0_0 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:02.740 Found net devices under 0000:09:00.1: cvl_0_1 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.740 18:06:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.740 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:02.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:11:02.741 00:11:02.741 --- 10.0.0.2 ping statistics --- 00:11:02.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.741 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:11:02.741 00:11:02.741 --- 10.0.0.1 ping statistics --- 00:11:02.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.741 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=496672 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 496672 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 496672 ']' 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.741 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.741 [2024-11-26 18:06:50.720460] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:11:02.741 [2024-11-26 18:06:50.720547] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.999 [2024-11-26 18:06:50.799073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:02.999 [2024-11-26 18:06:50.859067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.999 [2024-11-26 18:06:50.859126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.999 [2024-11-26 18:06:50.859153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.999 [2024-11-26 18:06:50.859164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.999 [2024-11-26 18:06:50.859173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.999 [2024-11-26 18:06:50.860755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.999 [2024-11-26 18:06:50.860822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.999 [2024-11-26 18:06:50.860818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.999 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.999 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:11:02.999 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:02.999 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:02.999 18:06:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:02.999 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.999 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:02.999 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.999 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:03.000 [2024-11-26 18:06:51.007387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:03.258 Malloc0 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:03.258 Delay0 
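By this point the abort test has started nvmf_tgt inside the target namespace, created the TCP transport, a 64 MB Malloc bdev with 4096-byte blocks, and a Delay0 bdev layered on top of it; the subsystem, namespace, and listener RPCs follow below. Stripped of the xtrace noise, the target-side sequence for this test is the following (rpc_cmd is the autotest wrapper that drives the target's JSON-RPC socket at /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The abort example (build/examples/abort) is then pointed at that listener with -q 128; its submit/abort completion counters are reported a few lines further down.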
00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:03.258 [2024-11-26 18:06:51.079877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.258 18:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:03.258 [2024-11-26 18:06:51.154251] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:05.789 Initializing NVMe Controllers 00:11:05.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:05.789 controller IO queue size 128 less than required 00:11:05.789 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:05.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:05.789 Initialization complete. Launching workers. 
00:11:05.789 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28948 00:11:05.789 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29009, failed to submit 62 00:11:05.789 success 28952, unsuccessful 57, failed 0 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.789 rmmod nvme_tcp 00:11:05.789 rmmod nvme_fabrics 00:11:05.789 rmmod nvme_keyring 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 496672 ']' 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 496672 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 496672 ']' 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 496672 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 496672 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 496672' 00:11:05.789 killing process with pid 496672 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 496672 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 496672 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.789 18:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.697 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:07.697 00:11:07.697 real 0m7.448s 00:11:07.697 user 0m10.563s 00:11:07.697 sys 0m2.624s 00:11:07.697 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.697 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:07.697 ************************************ 00:11:07.697 END TEST nvmf_abort 00:11:07.697 ************************************ 00:11:07.697 18:06:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:07.697 18:06:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:07.697 18:06:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.697 18:06:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:07.697 ************************************ 00:11:07.697 START TEST nvmf_ns_hotplug_stress 00:11:07.697 ************************************ 00:11:07.697 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:07.957 * Looking for test storage... 
00:11:07.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:07.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.957 --rc genhtml_branch_coverage=1 00:11:07.957 --rc genhtml_function_coverage=1 00:11:07.957 --rc genhtml_legend=1 00:11:07.957 --rc geninfo_all_blocks=1 00:11:07.957 --rc geninfo_unexecuted_blocks=1 00:11:07.957 00:11:07.957 ' 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:07.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.957 --rc genhtml_branch_coverage=1 00:11:07.957 --rc genhtml_function_coverage=1 00:11:07.957 --rc genhtml_legend=1 00:11:07.957 --rc geninfo_all_blocks=1 00:11:07.957 --rc geninfo_unexecuted_blocks=1 00:11:07.957 00:11:07.957 ' 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:07.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.957 --rc genhtml_branch_coverage=1 00:11:07.957 --rc genhtml_function_coverage=1 00:11:07.957 --rc genhtml_legend=1 00:11:07.957 --rc geninfo_all_blocks=1 00:11:07.957 --rc geninfo_unexecuted_blocks=1 00:11:07.957 00:11:07.957 ' 00:11:07.957 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:07.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:07.958 --rc genhtml_branch_coverage=1 00:11:07.958 --rc genhtml_function_coverage=1 00:11:07.958 --rc genhtml_legend=1 00:11:07.958 --rc geninfo_all_blocks=1 00:11:07.958 --rc geninfo_unexecuted_blocks=1 00:11:07.958 00:11:07.958 ' 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:07.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:07.958 18:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.492 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:10.493 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.493 
18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:10.493 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:10.493 Found net devices under 0000:09:00.0: cvl_0_0 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:10.493 Found net devices under 0000:09:00.1: cvl_0_1 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:10.493 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:10.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:11:10.494 00:11:10.494 --- 10.0.0.2 ping statistics --- 00:11:10.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.494 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:10.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:10.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:11:10.494 00:11:10.494 --- 10.0.0.1 ping statistics --- 00:11:10.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.494 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=499029 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 499029 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
499029 ']' 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.494 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.494 [2024-11-26 18:06:58.245698] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:11:10.494 [2024-11-26 18:06:58.245775] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.494 [2024-11-26 18:06:58.318177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:10.494 [2024-11-26 18:06:58.376454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.494 [2024-11-26 18:06:58.376505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.494 [2024-11-26 18:06:58.376519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.494 [2024-11-26 18:06:58.376529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.494 [2024-11-26 18:06:58.376540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
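The trace above has just plumbed the test network and launched the target: one ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the peer port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened with an iptables rule, connectivity is verified with ping in both directions, and nvmf_tgt is started inside the namespace (PID 499029 in this run). A minimal shell sketch of that bring-up, using the interface and namespace names from this log; paths, PIDs and core masks will differ on other machines:

  # Namespace-based back-to-back test network, as exercised by this run.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the log additionally tags this rule with an SPDK_NVMF comment
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> initiator
  # The target app then runs inside the namespace:
  # ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE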
00:11:10.494 [2024-11-26 18:06:58.378157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.494 [2024-11-26 18:06:58.378210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.494 [2024-11-26 18:06:58.378214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.751 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.751 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:11:10.751 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:10.751 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.751 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:10.751 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.751 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:10.751 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:11.009 [2024-11-26 18:06:58.790107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:11.009 18:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:11.275 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.534 [2024-11-26 18:06:59.320872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.534 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:11.792 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:12.051 Malloc0 00:11:12.051 18:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:12.309 Delay0 00:11:12.309 18:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.567 18:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:12.824 NULL1 00:11:12.824 18:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:13.081 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=499454 00:11:13.081 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:13.081 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:13.081 18:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.453 Read completed with error (sct=0, sc=11) 00:11:14.453 18:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:14.711 18:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:14.711 18:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:14.969 true 00:11:14.969 18:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:14.969 18:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:15.534 18:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.049 18:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:16.049 18:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:16.335 true 00:11:16.335 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:16.335 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.636 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.893 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:16.894 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:16.894 true 00:11:16.894 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:16.894 18:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.458 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.458 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:17.458 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:17.715 true 00:11:17.973 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:17.973 18:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.906 18:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.906 18:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:18.906 18:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:19.163 true 00:11:19.163 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:19.163 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.728 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.728 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:19.728 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:19.985 true 00:11:19.985 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:19.985 18:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.242 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.807 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:20.807 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:20.807 true 00:11:20.807 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:20.807 18:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.735 18:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.993 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:22.288 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:22.288 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:22.288 true 00:11:22.288 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:22.288 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.852 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.852 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:22.852 18:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:23.109 true 00:11:23.366 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:23.366 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.623 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.879 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:23.879 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 
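The repeating @44..@50 pattern above is the heart of the hotplug stress: for as long as the spdk_nvme_perf client started at @40 is still alive (kill -0 $PERF_PID), namespace 1 is detached, re-attached backed by Delay0, and NULL1 is resized one step larger (null_size counts up from its creation size of 1000) so a resize notification races with the attach/detach traffic. A rough reconstruction of that loop from the trace lines; variable names follow the log, but the exact control flow of ns_hotplug_stress.sh may differ:

  # Reconstructed from the target/ns_hotplug_stress.sh@44..@50 trace lines; approximate, not verbatim.
  rpc_py=./scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do            # loop until the 30-second perf run exits
      $rpc_py nvmf_subsystem_remove_ns "$nqn" 1        # hot-remove namespace 1
      $rpc_py nvmf_subsystem_add_ns "$nqn" Delay0      # hot-add it again, backed by the delay bdev
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"      # resize the NULL1-backed namespace under I/O
  done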
00:11:24.136 true 00:11:24.137 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:24.137 18:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.068 18:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.325 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:25.325 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:25.582 true 00:11:25.582 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:25.582 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.839 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.097 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:26.097 18:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:26.354 true 00:11:26.355 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:26.355 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.614 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.871 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:26.871 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:27.128 true 00:11:27.128 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:27.128 18:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.060 18:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.317 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:28.317 18:07:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:28.595 true 00:11:28.595 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:28.596 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.853 18:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.110 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:29.110 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:29.368 true 00:11:29.368 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:29.368 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.625 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.883 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:29.883 18:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:30.140 true 00:11:30.140 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:30.140 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.073 18:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.329 18:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:31.329 18:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:31.586 true 00:11:31.586 18:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:31.586 18:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.843 18:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.100 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:32.100 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:32.358 true 00:11:32.358 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:32.358 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.615 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.873 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:32.873 18:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:33.130 true 00:11:33.386 18:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:33.386 18:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.317 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:34.317 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.317 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:34.317 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:34.573 true 00:11:34.573 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:34.573 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.831 18:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.089 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:35.089 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:35.395 true 00:11:35.395 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:35.395 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.683 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.940 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:35.940 18:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:36.198 true 00:11:36.198 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:36.198 18:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:37.569 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.569 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:37.569 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:37.827 true 00:11:37.827 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:37.827 18:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.085 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.342 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:38.342 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:38.600 true 00:11:38.600 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:38.600 18:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.533 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.791 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:39.791 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 
00:11:40.048 true 00:11:40.049 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:40.049 18:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.306 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.563 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:40.563 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:40.820 true 00:11:40.820 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:40.820 18:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.076 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.333 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:41.333 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:41.590 true 00:11:41.590 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:41.847 18:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.779 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.779 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:42.779 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:43.036 true 00:11:43.036 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:43.036 18:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.293 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.550 Initializing NVMe Controllers 00:11:43.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:43.550 
Controller IO queue size 128, less than required. 00:11:43.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:43.550 Controller IO queue size 128, less than required. 00:11:43.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:43.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:43.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:43.550 Initialization complete. Launching workers. 00:11:43.550 ======================================================== 00:11:43.550 Latency(us) 00:11:43.550 Device Information : IOPS MiB/s Average min max 00:11:43.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 447.84 0.22 116231.52 3345.55 1029509.12 00:11:43.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8594.04 4.20 14895.10 3256.24 547230.44 00:11:43.550 ======================================================== 00:11:43.550 Total : 9041.89 4.41 19914.28 3256.24 1029509.12 00:11:43.550 00:11:43.550 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:43.550 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:43.808 true 00:11:43.808 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 499454 00:11:43.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (499454) - No such process 00:11:43.808 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 499454 00:11:43.808 18:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.066 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:44.323 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:44.323 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:44.323 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:44.323 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:44.323 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:44.581 null0 00:11:44.838 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:44.838 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:44.838 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:45.096 null1 
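The spdk_nvme_perf summary a few lines above is internally consistent and can be sanity-checked by hand: with the 512-byte random-read workload from @40, NSID 2's 8594.04 IOPS works out to 8594.04 × 512 B ≈ 4.20 MiB/s and NSID 1's 447.84 IOPS to ≈ 0.22 MiB/s; the Total row is the sum of the two IOPS columns (447.84 + 8594.04 ≈ 9041.89), and its 19914.28 us average latency is the IOPS-weighted mean of the per-namespace averages ((447.84 × 116231.52 + 8594.04 × 14895.10) / 9041.88 ≈ 19914 us). NSID 1 is the namespace that was backed by Delay0 and repeatedly hot-removed and re-added during the run, which is consistent with its far higher average and worst-case latency than the NULL1-backed NSID 2.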
00:11:45.096 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:45.096 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:45.096 18:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:45.354 null2 00:11:45.354 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:45.354 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:45.354 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:45.611 null3 00:11:45.611 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:45.612 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:45.612 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:45.869 null4 00:11:45.869 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:45.869 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:45.869 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:46.126 null5 00:11:46.126 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:46.126 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:46.126 18:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:46.385 null6 00:11:46.385 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:46.385 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:46.385 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:46.643 null7 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
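
At script lines 58-60 the trace switches to the multi-worker phase: nthreads=8, an empty pids array, and eight null bdevs (null0 through null7) created with bdev_null_create, each with arguments 100 and 4096 (size in MB and block size in bytes, as the RPC takes them). A sketch of that setup loop as it appears from the trace:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4096-byte blocks
    done
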
00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
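
Each worker runs the script's add_remove helper; from the xtrace markers at ns_hotplug_stress.sh lines 14-18, it takes a namespace ID and a bdev name and attaches then detaches that namespace ten times. Roughly, as reconstructed from the trace (not the verbatim function body):

    add_remove() {
        local nsid=$1 bdev=$2                                                         # line 14
        for ((i = 0; i < 10; i++)); do                                                # line 16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
        done
    }
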
00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 503525 503526 503528 503530 503532 503534 503536 503538 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:46.643 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:46.901 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:46.901 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:46.901 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.901 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:46.901 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:46.901 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:46.901 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:46.901 18:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.159 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:47.416 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.416 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:47.416 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:47.416 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:47.416 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:47.416 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:47.416 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:47.416 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:47.674 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.674 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.674 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:47.674 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.674 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.674 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:47.933 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:48.191 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.191 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:48.191 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:48.191 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:48.191 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:48.191 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:48.191 18:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:48.191 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.449 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.450 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:48.450 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.450 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.450 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:48.450 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.450 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.450 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:48.708 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:48.708 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.708 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:48.708 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:48.708 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:48.708 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:48.708 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:48.708 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
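
The interleaved add/remove records above come from all eight add_remove workers hammering the same subsystem concurrently: lines 62-64 launch one background job per namespace/bdev pair and record its PID, and line 66 blocks on all of them (the "wait 503525 503526 ..." record earlier in the trace). A sketch of that dispatch, inferred from the trace:

    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # namespace IDs 1-8 paired with null0-null7
        pids+=($!)                         # line 64
    done
    wait "${pids[@]}"                      # line 66: wait for every worker to finish
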
00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:48.966 18:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:49.224 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.224 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:49.224 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:49.224 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:49.224 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.224 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:49.224 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:49.481 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:49.739 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.739 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.739 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.739 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.739 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:49.739 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:49.739 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.739 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.739 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:49.740 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:49.998 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:49.998 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:49.998 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.998 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:49.998 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.998 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:49.998 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:49.998 18:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.257 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:50.514 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:50.514 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:50.514 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:50.514 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:50.515 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.515 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:50.515 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:50.515 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:50.773 18:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:51.030 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.030 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.030 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:51.030 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:51.030 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:51.030 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.030 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:51.030 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:51.595 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:51.596 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:51.853 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:51.853 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:51.853 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:51.853 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.853 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.853 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:51.853 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.853 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.111 18:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:52.367 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:52.367 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:52.367 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.367 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:52.367 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:52.367 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.367 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:52.367 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.625 rmmod nvme_tcp 00:11:52.625 rmmod nvme_fabrics 00:11:52.625 rmmod nvme_keyring 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 499029 ']' 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 499029 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 499029 ']' 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 499029 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.625 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 499029 00:11:52.883 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:52.883 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:52.883 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 499029' 00:11:52.883 killing process with pid 499029 00:11:52.883 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 499029 00:11:52.883 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 499029 00:11:53.141 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:53.141 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 
-- # [[ tcp == \t\c\p ]] 00:11:53.141 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:53.141 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:11:53.141 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:53.142 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:53.142 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:53.142 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.142 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:53.142 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.142 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.142 18:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.114 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.114 00:11:55.114 real 0m47.255s 00:11:55.114 user 3m38.864s 00:11:55.114 sys 0m16.348s 00:11:55.114 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.114 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.114 ************************************ 00:11:55.114 END TEST nvmf_ns_hotplug_stress 00:11:55.114 ************************************ 00:11:55.114 18:07:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:55.114 18:07:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.114 18:07:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.114 18:07:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:55.114 ************************************ 00:11:55.114 START TEST nvmf_delete_subsystem 00:11:55.114 ************************************ 00:11:55.114 18:07:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:55.114 * Looking for test storage... 
00:11:55.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.114 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:55.114 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:55.114 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:55.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.374 --rc genhtml_branch_coverage=1 00:11:55.374 --rc genhtml_function_coverage=1 00:11:55.374 --rc genhtml_legend=1 00:11:55.374 --rc geninfo_all_blocks=1 00:11:55.374 --rc geninfo_unexecuted_blocks=1 00:11:55.374 00:11:55.374 ' 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:55.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.374 --rc genhtml_branch_coverage=1 00:11:55.374 --rc genhtml_function_coverage=1 00:11:55.374 --rc genhtml_legend=1 00:11:55.374 --rc geninfo_all_blocks=1 00:11:55.374 --rc geninfo_unexecuted_blocks=1 00:11:55.374 00:11:55.374 ' 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:55.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.374 --rc genhtml_branch_coverage=1 00:11:55.374 --rc genhtml_function_coverage=1 00:11:55.374 --rc genhtml_legend=1 00:11:55.374 --rc geninfo_all_blocks=1 00:11:55.374 --rc geninfo_unexecuted_blocks=1 00:11:55.374 00:11:55.374 ' 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:55.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.374 --rc genhtml_branch_coverage=1 00:11:55.374 --rc genhtml_function_coverage=1 00:11:55.374 --rc genhtml_legend=1 00:11:55.374 --rc geninfo_all_blocks=1 00:11:55.374 --rc geninfo_unexecuted_blocks=1 00:11:55.374 00:11:55.374 ' 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.374 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.375 18:07:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.905 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:57.906 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.906 
18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:57.906 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:57.906 Found net devices under 0000:09:00.0: cvl_0_0 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:57.906 Found net devices under 0000:09:00.1: cvl_0_1 
00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:11:57.906 00:11:57.906 --- 10.0.0.2 ping statistics --- 00:11:57.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.906 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:11:57.906 00:11:57.906 --- 10.0.0.1 ping statistics --- 00:11:57.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.906 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=506431 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 506431 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 506431 ']' 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.906 18:07:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.906 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.906 [2024-11-26 18:07:45.554885] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:11:57.906 [2024-11-26 18:07:45.554992] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.907 [2024-11-26 18:07:45.626230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:57.907 [2024-11-26 18:07:45.679900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.907 [2024-11-26 18:07:45.679958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.907 [2024-11-26 18:07:45.679981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.907 [2024-11-26 18:07:45.679991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.907 [2024-11-26 18:07:45.680000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.907 [2024-11-26 18:07:45.681386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.907 [2024-11-26 18:07:45.681392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.907 [2024-11-26 18:07:45.829506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:57.907 18:07:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.907 [2024-11-26 18:07:45.845766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.907 NULL1 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.907 Delay0 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=506459 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:57.907 18:07:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:58.168 [2024-11-26 18:07:45.930578] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:12:00.065 18:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.065 18:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.065 18:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 [2024-11-26 18:07:48.012482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96860 is same with the state(6) to be set 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read 
completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with 
error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Write completed with error (sct=0, sc=8) 00:12:00.066 starting I/O failed: -6 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.066 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 starting I/O failed: -6 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 [2024-11-26 18:07:48.013140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5488000c40 is same with the state(6) to be set 00:12:00.067 starting I/O failed: -6 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with 
error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Write completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.067 Read completed with error (sct=0, sc=8) 00:12:00.999 [2024-11-26 18:07:48.985963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e979b0 is same with the state(6) to be set 00:12:01.256 Read completed with error (sct=0, sc=8) 00:12:01.256 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 [2024-11-26 18:07:49.015200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e96680 is same with the state(6) to be set 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 [2024-11-26 18:07:49.015411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x7f548800d020 is same with the state(6) to be set 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 [2024-11-26 18:07:49.015567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f548800d800 is same with the state(6) to be set 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Write completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 Read completed with error (sct=0, sc=8) 00:12:01.257 [2024-11-26 18:07:49.016287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e962c0 is same with the state(6) to be set 00:12:01.257 Initializing NVMe Controllers 00:12:01.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:01.257 Controller IO queue size 128, less than required. 00:12:01.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:01.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:01.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:01.257 Initialization complete. Launching workers. 
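[editor's note] Every failed completion in the block above carries (sct=0, sc=8). Read against the NVMe generic command status table, status code 0x08 is "Command Aborted due to SQ Deletion", which is consistent with the delete_subsystem test tearing down cnode1 while spdk_nvme_perf still has I/O in flight; the interleaved "starting I/O failed: -6" lines look like the submit path returning -ENXIO once the qpair is gone. A purely illustrative helper (not part of the SPDK tree) for decoding the pair:

```bash
#!/usr/bin/env bash
# Illustrative only -- not an SPDK script. Decodes the (sct, sc) pair that
# spdk_nvme_perf prints for each failed completion. Only the generic status
# codes (sct=0) relevant to this log are listed; see the NVMe base
# specification for the full table.
decode_nvme_status() {
    local sct=$1 sc=$2
    if (( sct != 0 )); then
        echo "status type $sct, code $sc (non-generic, not decoded here)"
        return
    fi
    case $sc in
        0)  echo "Successful Completion" ;;
        4)  echo "Data Transfer Error" ;;
        6)  echo "Internal Error" ;;
        7)  echo "Command Abort Requested" ;;
        8)  echo "Command Aborted due to SQ Deletion" ;;
        *)  echo "generic status code $sc (see NVMe spec)" ;;
    esac
}

decode_nvme_status 0 8   # -> Command Aborted due to SQ Deletion
```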
00:12:01.257 ======================================================== 00:12:01.257 Latency(us) 00:12:01.257 Device Information : IOPS MiB/s Average min max 00:12:01.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.28 0.08 906647.60 585.83 1012153.65 00:12:01.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.82 0.08 919177.54 615.29 1012294.51 00:12:01.257 ======================================================== 00:12:01.257 Total : 325.10 0.16 912807.36 585.83 1012294.51 00:12:01.257 00:12:01.257 [2024-11-26 18:07:49.016823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e979b0 (9): Bad file descriptor 00:12:01.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:01.257 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.257 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:01.257 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 506459 00:12:01.257 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:01.513 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:01.513 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 506459 00:12:01.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (506459) - No such process 00:12:01.513 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 506459 00:12:01.513 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:12:01.513 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 506459 00:12:01.513 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 506459 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.769 18:07:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.769 [2024-11-26 18:07:49.540451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=506867 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506867 00:12:01.769 18:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:01.769 [2024-11-26 18:07:49.603843] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
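[editor's note] The rpc_cmd calls above rebuild the subsystem deleted in the previous step and start a second spdk_nvme_perf run against it; rpc_cmd is the harness's wrapper around scripts/rpc.py. A hedged standalone equivalent, assuming the target's RPC socket is the default /var/tmp/spdk.sock and reusing the NQN, bdev name and perf flags visible in the trace:

```bash
# Standalone sketch of the subsystem re-creation and perf relaunch traced
# above. Assumes the nvmf target is already running on the default RPC
# socket; all values are copied from the trace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

# Re-create cnode1: allow any host (-a), fixed serial, max 10 namespaces.
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 3-second 70/30 randrw run, 512-byte I/O, queue depth 128, cores 2-3.
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
```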
00:12:02.333 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:02.333 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506867 00:12:02.333 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:02.591 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:02.591 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506867 00:12:02.591 18:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:03.210 18:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:03.210 18:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506867 00:12:03.210 18:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:03.775 18:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:03.775 18:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506867 00:12:03.775 18:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:04.340 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:04.340 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506867 00:12:04.340 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:04.597 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:04.597 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506867 00:12:04.597 18:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:04.855 Initializing NVMe Controllers 00:12:04.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:04.855 Controller IO queue size 128, less than required. 00:12:04.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:04.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:04.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:04.855 Initialization complete. Launching workers. 
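[editor's note] The repeated "(( delay++ > 20 )) / kill -0 506867 / sleep 0.5" entries above are delete_subsystem.sh polling the perf process roughly twice a second until it exits or a ~10 s budget runs out. A rough reconstruction of that loop; the real script may differ in detail:

```bash
# Poll-until-exit sketch of the loop traced above (delete_subsystem.sh
# lines 56-60). kill -0 only tests for process existence; no signal is sent.
perf_pid=506867   # pid of the spdk_nvme_perf run started earlier
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "spdk_nvme_perf ($perf_pid) did not finish in time" >&2
        break
    fi
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null   # reap the child and pick up its exit status
```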
00:12:04.855 ======================================================== 00:12:04.855 Latency(us) 00:12:04.855 Device Information : IOPS MiB/s Average min max 00:12:04.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005678.67 1000168.75 1045015.03 00:12:04.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005059.06 1000297.48 1012672.92 00:12:04.855 ======================================================== 00:12:04.855 Total : 256.00 0.12 1005368.87 1000168.75 1045015.03 00:12:04.855 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506867 00:12:05.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (506867) - No such process 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 506867 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.113 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.113 rmmod nvme_tcp 00:12:05.113 rmmod nvme_fabrics 00:12:05.113 rmmod nvme_keyring 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 506431 ']' 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 506431 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 506431 ']' 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 506431 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 506431 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 506431' 00:12:05.371 killing process with pid 506431 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 506431 00:12:05.371 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 506431 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.631 18:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.536 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.536 00:12:07.536 real 0m12.441s 00:12:07.536 user 0m27.793s 00:12:07.536 sys 0m3.016s 00:12:07.536 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.536 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.536 ************************************ 00:12:07.536 END TEST nvmf_delete_subsystem 00:12:07.536 ************************************ 00:12:07.536 18:07:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:07.536 18:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.536 18:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.536 18:07:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:07.536 ************************************ 00:12:07.536 START TEST nvmf_host_management 00:12:07.536 ************************************ 00:12:07.536 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:07.536 * Looking for test storage... 
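[editor's note] The nvmftestfini trace above unwinds what the test set up: sync, unload the host-side nvme modules, kill the nvmf_tgt reactor process, strip the SPDK_NVMF-tagged firewall rules and drop the target network namespace. Condensed into one hedged helper; the real nvmf/common.sh implementation has more branches (RDMA, virtual NICs, retries) and the function name here is illustrative:

```bash
# Hedged, condensed teardown sketch of the nvmftestfini sequence above.
nvmf_teardown_sketch() {
    local nvmfpid=$1                      # pid of the nvmf_tgt under test
    sync
    # Unload host-side NVMe-oF modules (mirrors the rmmod lines above).
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true
    # Stop the target application.
    kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid" 2>/dev/null
    # Remove only the firewall rules tagged for this test run.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Drop the target-side namespace and flush the initiator-side address.
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1 2>/dev/null || true
}
```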
00:12:07.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:07.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.795 --rc genhtml_branch_coverage=1 00:12:07.795 --rc genhtml_function_coverage=1 00:12:07.795 --rc genhtml_legend=1 00:12:07.795 --rc geninfo_all_blocks=1 00:12:07.795 --rc geninfo_unexecuted_blocks=1 00:12:07.795 00:12:07.795 ' 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:07.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.795 --rc genhtml_branch_coverage=1 00:12:07.795 --rc genhtml_function_coverage=1 00:12:07.795 --rc genhtml_legend=1 00:12:07.795 --rc geninfo_all_blocks=1 00:12:07.795 --rc geninfo_unexecuted_blocks=1 00:12:07.795 00:12:07.795 ' 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:07.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.795 --rc genhtml_branch_coverage=1 00:12:07.795 --rc genhtml_function_coverage=1 00:12:07.795 --rc genhtml_legend=1 00:12:07.795 --rc geninfo_all_blocks=1 00:12:07.795 --rc geninfo_unexecuted_blocks=1 00:12:07.795 00:12:07.795 ' 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:07.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.795 --rc genhtml_branch_coverage=1 00:12:07.795 --rc genhtml_function_coverage=1 00:12:07.795 --rc genhtml_legend=1 00:12:07.795 --rc geninfo_all_blocks=1 00:12:07.795 --rc geninfo_unexecuted_blocks=1 00:12:07.795 00:12:07.795 ' 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.795 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:07.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.796 18:07:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:10.325 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:10.325 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:10.325 Found net devices under 0000:09:00.0: cvl_0_0 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.325 18:07:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:10.325 Found net devices under 0000:09:00.1: cvl_0_1 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:10.325 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:10.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:10.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:12:10.326 00:12:10.326 --- 10.0.0.2 ping statistics --- 00:12:10.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.326 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:10.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:10.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:12:10.326 00:12:10.326 --- 10.0.0.1 ping statistics --- 00:12:10.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:10.326 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:10.326 18:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=509348 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 509348 00:12:10.326 18:07:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 509348 ']' 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.326 [2024-11-26 18:07:58.068437] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:12:10.326 [2024-11-26 18:07:58.068536] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.326 [2024-11-26 18:07:58.140126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.326 [2024-11-26 18:07:58.195618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.326 [2024-11-26 18:07:58.195671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.326 [2024-11-26 18:07:58.195694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.326 [2024-11-26 18:07:58.195704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.326 [2024-11-26 18:07:58.195713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
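[editor's note] nvmfappstart above launches nvmf_tgt inside the target namespace with reactor mask 0x1E, and waitforlisten then blocks until the RPC socket answers. A trimmed-down sketch of that sequence, assuming the default /var/tmp/spdk.sock; the real waitforlisten in autotest_common.sh retries more carefully and also checks that the pid is still alive:

```bash
# Sketch of the target start-up traced above; paths and core mask are
# copied from the trace, the polling logic is simplified.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll the RPC socket until the app is ready (give up after ~10 s).
for ((i = 0; i < 100; i++)); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
            &>/dev/null; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt up as pid $nvmfpid"
```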
00:12:10.326 [2024-11-26 18:07:58.197270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.326 [2024-11-26 18:07:58.197336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.326 [2024-11-26 18:07:58.197402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:10.326 [2024-11-26 18:07:58.197406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.326 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.585 [2024-11-26 18:07:58.347775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.585 Malloc0 00:12:10.585 [2024-11-26 18:07:58.423435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=509395 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 509395 /var/tmp/bdevperf.sock 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 509395 ']' 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:10.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:10.585 { 00:12:10.585 "params": { 00:12:10.585 "name": "Nvme$subsystem", 00:12:10.585 "trtype": "$TEST_TRANSPORT", 00:12:10.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:10.585 "adrfam": "ipv4", 00:12:10.585 "trsvcid": "$NVMF_PORT", 00:12:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:10.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:10.585 "hdgst": ${hdgst:-false}, 00:12:10.585 "ddgst": ${ddgst:-false} 00:12:10.585 }, 00:12:10.585 "method": "bdev_nvme_attach_controller" 00:12:10.585 } 00:12:10.585 EOF 00:12:10.585 )") 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:10.585 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:10.585 "params": { 00:12:10.585 "name": "Nvme0", 00:12:10.585 "trtype": "tcp", 00:12:10.585 "traddr": "10.0.0.2", 00:12:10.585 "adrfam": "ipv4", 00:12:10.585 "trsvcid": "4420", 00:12:10.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:10.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:10.585 "hdgst": false, 00:12:10.585 "ddgst": false 00:12:10.585 }, 00:12:10.585 "method": "bdev_nvme_attach_controller" 00:12:10.585 }' 00:12:10.585 [2024-11-26 18:07:58.508216] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:12:10.585 [2024-11-26 18:07:58.508315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509395 ] 00:12:10.585 [2024-11-26 18:07:58.578414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.842 [2024-11-26 18:07:58.638936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.100 Running I/O for 10 seconds... 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:12:11.100 18:07:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:11.359 
18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:11.359 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:11.360 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:11.360 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.360 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:11.360 [2024-11-26 18:07:59.270657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.360 [2024-11-26 18:07:59.270732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.360 [2024-11-26 18:07:59.270752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.360 [2024-11-26 18:07:59.270767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.360 [2024-11-26 18:07:59.270781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.360 [2024-11-26 18:07:59.270799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.360 [2024-11-26 18:07:59.270816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.360 [2024-11-26 18:07:59.270829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.360 [2024-11-26 18:07:59.270842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2ea50 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273232] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 
00:12:11.360 [2024-11-26 18:07:59.273620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is 
same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.273994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.274007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.274018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.274030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.274041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.274053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.274064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.274076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x76df10 is same with the state(6) to be set 00:12:11.360 [2024-11-26 18:07:59.274164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.360 [2024-11-26 18:07:59.274192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.360 [2024-11-26 18:07:59.274223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.360 [2024-11-26 18:07:59.274239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.360 [2024-11-26 18:07:59.274255] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.360 [2024-11-26 18:07:59.274270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.360 [2024-11-26 18:07:59.274285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.360 [2024-11-26 18:07:59.274300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.360 [2024-11-26 18:07:59.274331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.361 [2024-11-26 18:07:59.274630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:11.361 [2024-11-26 18:07:59.274809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:12 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.361 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.274977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.274992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:11.361 [2024-11-26 18:07:59.275051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.361 [2024-11-26 18:07:59.275464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.361 [2024-11-26 18:07:59.275479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.275979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.275992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.276008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.276022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.276038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.276052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.276067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.276081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.276096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.276111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.276126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.276140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.276156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.276170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.276187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:11.362 [2024-11-26 18:07:59.276202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.362 [2024-11-26 18:07:59.276217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf47740 is same with the state(6) to be set 00:12:11.362 [2024-11-26 18:07:59.277513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:12:11.362 task offset: 73728 on job bdev=Nvme0n1 fails 00:12:11.362 00:12:11.362 Latency(us) 00:12:11.362 [2024-11-26T17:07:59.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:11.362 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:11.362 Job: Nvme0n1 ended in about 0.41 seconds with error 00:12:11.362 Verification LBA range: start 0x0 length 0x400 00:12:11.362 Nvme0n1 : 0.41 1413.65 88.35 157.07 0.00 39611.77 7136.14 36311.80 00:12:11.362 [2024-11-26T17:07:59.373Z] =================================================================================================================== 00:12:11.362 [2024-11-26T17:07:59.373Z] Total : 1413.65 88.35 157.07 0.00 39611.77 7136.14 36311.80 00:12:11.362 [2024-11-26 18:07:59.279638] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:11.362 [2024-11-26 18:07:59.279679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2ea50 (9): Bad file descriptor 00:12:11.362 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.362 18:07:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:11.362 [2024-11-26 18:07:59.288460] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
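Note added for clarity (not part of the captured output): in the bdevperf summary above, the MiB/s column is simply IOPS scaled by the I/O size used for the run (IO size: 65536, i.e. 64 KiB per I/O). A one-line awk check of the reported figures:

  awk 'BEGIN { printf "%.2f MiB/s\n", 1413.65 * 65536 / 1048576 }'   # prints 88.35, matching the table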
00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 509395 00:12:12.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (509395) - No such process 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:12.295 { 00:12:12.295 "params": { 00:12:12.295 "name": "Nvme$subsystem", 00:12:12.295 "trtype": "$TEST_TRANSPORT", 00:12:12.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:12.295 "adrfam": "ipv4", 00:12:12.295 "trsvcid": "$NVMF_PORT", 00:12:12.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:12.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:12.295 "hdgst": ${hdgst:-false}, 00:12:12.295 "ddgst": ${ddgst:-false} 00:12:12.295 }, 00:12:12.295 "method": "bdev_nvme_attach_controller" 00:12:12.295 } 00:12:12.295 EOF 00:12:12.295 )") 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:12:12.295 18:08:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:12.295 "params": { 00:12:12.295 "name": "Nvme0", 00:12:12.295 "trtype": "tcp", 00:12:12.295 "traddr": "10.0.0.2", 00:12:12.295 "adrfam": "ipv4", 00:12:12.295 "trsvcid": "4420", 00:12:12.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:12.295 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:12.295 "hdgst": false, 00:12:12.295 "ddgst": false 00:12:12.295 }, 00:12:12.295 "method": "bdev_nvme_attach_controller" 00:12:12.295 }' 00:12:12.554 [2024-11-26 18:08:00.336058] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:12:12.554 [2024-11-26 18:08:00.336160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509676 ] 00:12:12.554 [2024-11-26 18:08:00.406147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.554 [2024-11-26 18:08:00.469349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.119 Running I/O for 1 seconds... 
00:12:14.051 1536.00 IOPS, 96.00 MiB/s 00:12:14.051 Latency(us) 00:12:14.051 [2024-11-26T17:08:02.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.051 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:14.051 Verification LBA range: start 0x0 length 0x400 00:12:14.051 Nvme0n1 : 1.02 1569.08 98.07 0.00 0.00 40147.06 5922.51 36505.98 00:12:14.051 [2024-11-26T17:08:02.062Z] =================================================================================================================== 00:12:14.051 [2024-11-26T17:08:02.062Z] Total : 1569.08 98.07 0.00 0.00 40147.06 5922.51 36505.98 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.309 rmmod nvme_tcp 00:12:14.309 rmmod nvme_fabrics 00:12:14.309 rmmod nvme_keyring 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 509348 ']' 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 509348 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 509348 ']' 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 509348 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 509348 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 509348' 00:12:14.309 killing process with pid 509348 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 509348 00:12:14.309 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 509348 00:12:14.569 [2024-11-26 18:08:02.385971] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.569 18:08:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.475 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:16.476 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:16.476 00:12:16.476 real 0m8.968s 00:12:16.476 user 0m19.911s 00:12:16.476 sys 0m2.838s 00:12:16.476 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.476 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.476 ************************************ 00:12:16.476 END TEST nvmf_host_management 00:12:16.476 ************************************ 00:12:16.476 18:08:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:16.476 18:08:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.476 18:08:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:16.736 ************************************ 00:12:16.736 START TEST nvmf_lvol 00:12:16.736 ************************************ 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:16.736 * Looking for test storage... 00:12:16.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.736 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.737 --rc genhtml_branch_coverage=1 00:12:16.737 --rc genhtml_function_coverage=1 00:12:16.737 --rc genhtml_legend=1 00:12:16.737 --rc geninfo_all_blocks=1 00:12:16.737 --rc geninfo_unexecuted_blocks=1 00:12:16.737 00:12:16.737 ' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.737 --rc genhtml_branch_coverage=1 00:12:16.737 --rc genhtml_function_coverage=1 00:12:16.737 --rc genhtml_legend=1 00:12:16.737 --rc geninfo_all_blocks=1 00:12:16.737 --rc geninfo_unexecuted_blocks=1 00:12:16.737 00:12:16.737 ' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.737 --rc genhtml_branch_coverage=1 00:12:16.737 --rc genhtml_function_coverage=1 00:12:16.737 --rc genhtml_legend=1 00:12:16.737 --rc geninfo_all_blocks=1 00:12:16.737 --rc geninfo_unexecuted_blocks=1 00:12:16.737 00:12:16.737 ' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:16.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.737 --rc genhtml_branch_coverage=1 00:12:16.737 --rc genhtml_function_coverage=1 00:12:16.737 --rc genhtml_legend=1 00:12:16.737 --rc geninfo_all_blocks=1 00:12:16.737 --rc geninfo_unexecuted_blocks=1 00:12:16.737 00:12:16.737 ' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
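For orientation, the nvmf_lvol run traced below boils down to roughly the following RPC sequence. This is a condensed sketch assembled from the commands visible later in this trace, not the script itself; rpc.py stands for the full scripts/rpc.py path, and the <...> placeholders stand in for the UUIDs and PID generated at runtime.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                     # Malloc0
rpc.py bdev_malloc_create 64 512                     # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs            # returns the lvstore UUID
rpc.py bdev_lvol_create -u <lvstore-uuid> lvol 20    # initial size 20 (LVOL_BDEV_INIT_SIZE)
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &   # backgrounded; pid captured as perf_pid
rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
rpc.py bdev_lvol_resize <lvol-uuid> 30               # grow to LVOL_BDEV_FINAL_SIZE
rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
rpc.py bdev_lvol_inflate <clone-uuid>
wait <perf-pid>
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_lvol_delete <lvol-uuid>
rpc.py bdev_lvol_delete_lvstore -u <lvstore-uuid>
The perf job therefore keeps I/O running against the exported volume while the snapshot, resize, clone and inflate operations execute; its latency summary is the table that appears near the end of the run.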
00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.737 18:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:19.272 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:19.272 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.272 18:08:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:19.272 Found net devices under 0000:09:00.0: cvl_0_0 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:19.272 Found net devices under 0000:09:00.1: cvl_0_1 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.272 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:19.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:12:19.273 00:12:19.273 --- 10.0.0.2 ping statistics --- 00:12:19.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.273 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:12:19.273 00:12:19.273 --- 10.0.0.1 ping statistics --- 00:12:19.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.273 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.273 18:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=511886 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 511886 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 511886 ']' 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.273 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:19.273 [2024-11-26 18:08:07.073732] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:12:19.273 [2024-11-26 18:08:07.073839] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.273 [2024-11-26 18:08:07.144530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:19.273 [2024-11-26 18:08:07.201887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.273 [2024-11-26 18:08:07.201941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.273 [2024-11-26 18:08:07.201970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.273 [2024-11-26 18:08:07.201981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.273 [2024-11-26 18:08:07.201991] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.273 [2024-11-26 18:08:07.203437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.273 [2024-11-26 18:08:07.203503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.273 [2024-11-26 18:08:07.203507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.530 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.530 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:12:19.530 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.530 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.530 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:19.530 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.530 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:19.788 [2024-11-26 18:08:07.588721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.788 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:20.046 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:20.046 18:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:20.304 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:20.304 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:20.562 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:20.820 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8c1c3f49-bb17-48cb-b0f0-6cfe7cb31fe9 00:12:20.820 18:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8c1c3f49-bb17-48cb-b0f0-6cfe7cb31fe9 lvol 20 00:12:21.077 18:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e8655b9a-722c-4f66-95d2-ee7d98b35271 00:12:21.077 18:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:21.346 18:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8655b9a-722c-4f66-95d2-ee7d98b35271 00:12:21.605 18:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:21.863 [2024-11-26 18:08:09.813048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.863 18:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:22.120 18:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=512195 00:12:22.120 18:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:22.120 18:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:23.491 18:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e8655b9a-722c-4f66-95d2-ee7d98b35271 MY_SNAPSHOT 00:12:23.491 18:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2c73ed7e-8d0f-425f-a4f6-37bbfa78d7ec 00:12:23.491 18:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e8655b9a-722c-4f66-95d2-ee7d98b35271 30 00:12:23.749 18:08:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2c73ed7e-8d0f-425f-a4f6-37bbfa78d7ec MY_CLONE 00:12:24.314 18:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d32b862e-7ecc-4eae-9353-1d0f7b9fdbe0 00:12:24.314 18:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d32b862e-7ecc-4eae-9353-1d0f7b9fdbe0 00:12:24.879 18:08:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 512195 00:12:32.981 Initializing NVMe Controllers 00:12:32.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:32.982 Controller IO queue size 128, less than required. 00:12:32.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:12:32.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:32.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:32.982 Initialization complete. Launching workers. 00:12:32.982 ======================================================== 00:12:32.982 Latency(us) 00:12:32.982 Device Information : IOPS MiB/s Average min max 00:12:32.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10565.40 41.27 12114.90 1405.05 68495.36 00:12:32.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10378.80 40.54 12340.41 2103.95 68561.76 00:12:32.982 ======================================================== 00:12:32.982 Total : 20944.20 81.81 12226.65 1405.05 68561.76 00:12:32.982 00:12:32.982 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:32.982 18:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8655b9a-722c-4f66-95d2-ee7d98b35271 00:12:33.239 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c1c3f49-bb17-48cb-b0f0-6cfe7cb31fe9 00:12:33.496 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:33.496 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:33.496 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:33.496 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:33.497 rmmod nvme_tcp 00:12:33.497 rmmod nvme_fabrics 00:12:33.497 rmmod nvme_keyring 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 511886 ']' 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 511886 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 511886 ']' 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 511886 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 511886 00:12:33.497 18:08:21 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 511886' 00:12:33.497 killing process with pid 511886 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 511886 00:12:33.497 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 511886 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.755 18:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:36.369 00:12:36.369 real 0m19.255s 00:12:36.369 user 1m5.783s 00:12:36.369 sys 0m5.429s 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:36.369 ************************************ 00:12:36.369 END TEST nvmf_lvol 00:12:36.369 ************************************ 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:36.369 ************************************ 00:12:36.369 START TEST nvmf_lvs_grow 00:12:36.369 ************************************ 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:36.369 * Looking for test storage... 
00:12:36.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:36.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.369 --rc genhtml_branch_coverage=1 00:12:36.369 --rc genhtml_function_coverage=1 00:12:36.369 --rc genhtml_legend=1 00:12:36.369 --rc geninfo_all_blocks=1 00:12:36.369 --rc geninfo_unexecuted_blocks=1 00:12:36.369 00:12:36.369 ' 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:36.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.369 --rc genhtml_branch_coverage=1 00:12:36.369 --rc genhtml_function_coverage=1 00:12:36.369 --rc genhtml_legend=1 00:12:36.369 --rc geninfo_all_blocks=1 00:12:36.369 --rc geninfo_unexecuted_blocks=1 00:12:36.369 00:12:36.369 ' 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:36.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.369 --rc genhtml_branch_coverage=1 00:12:36.369 --rc genhtml_function_coverage=1 00:12:36.369 --rc genhtml_legend=1 00:12:36.369 --rc geninfo_all_blocks=1 00:12:36.369 --rc geninfo_unexecuted_blocks=1 00:12:36.369 00:12:36.369 ' 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:36.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:36.369 --rc genhtml_branch_coverage=1 00:12:36.369 --rc genhtml_function_coverage=1 00:12:36.369 --rc genhtml_legend=1 00:12:36.369 --rc geninfo_all_blocks=1 00:12:36.369 --rc geninfo_unexecuted_blocks=1 00:12:36.369 00:12:36.369 ' 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:36.369 18:08:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.369 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:12:36.370 18:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:38.277 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:38.277 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:38.277 18:08:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:38.277 Found net devices under 0000:09:00.0: cvl_0_0 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:38.277 Found net devices under 0000:09:00.1: cvl_0_1 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.277 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.278 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.278 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.278 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:38.278 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.278 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.278 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.278 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:38.278 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:38.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:12:38.278 00:12:38.278 --- 10.0.0.2 ping statistics --- 00:12:38.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.278 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:12:38.278 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:12:38.536 00:12:38.536 --- 10.0.0.1 ping statistics --- 00:12:38.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.537 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=515603 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 515603 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 515603 ']' 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.537 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:38.537 [2024-11-26 18:08:26.364489] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
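The trace above is nvmftestinit building the NVMe/TCP test bed on the two Intel E810 ports (0000:09:00.0 / 0000:09:00.1, exposed as cvl_0_0 and cvl_0_1 by the ice driver): the target-side port is moved into a private network namespace, the 10.0.0.0/24 address pair is assigned, TCP port 4420 is opened in iptables on the initiator interface, and reachability is verified with one ping in each direction before nvmf_tgt is started inside the namespace with -m 0x1. Condensed to its essential commands, and keeping the interface names and addresses printed in this run, the setup is roughly:

    ip netns add cvl_0_0_ns_spdk                          # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # comment string from the trace omitted
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command that follows is wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why nvmf_tgt listens on 10.0.0.2 while bdevperf connects from the root namespace via 10.0.0.1.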
00:12:38.537 [2024-11-26 18:08:26.364585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.537 [2024-11-26 18:08:26.435075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.537 [2024-11-26 18:08:26.489700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.537 [2024-11-26 18:08:26.489758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.537 [2024-11-26 18:08:26.489781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.537 [2024-11-26 18:08:26.489792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.537 [2024-11-26 18:08:26.489802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.537 [2024-11-26 18:08:26.490371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.795 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.795 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:12:38.795 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.795 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.795 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:38.795 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.795 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:39.055 [2024-11-26 18:08:26.871981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:39.055 ************************************ 00:12:39.055 START TEST lvs_grow_clean 00:12:39.055 ************************************ 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:39.055 18:08:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:39.055 18:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:39.313 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:39.313 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:39.571 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:39.571 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:39.571 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:39.830 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:39.830 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:39.830 18:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb lvol 150 00:12:40.088 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b7ec09dd-779b-470b-80c7-8b59db002e2b 00:12:40.088 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:40.088 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:40.346 [2024-11-26 18:08:28.297772] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:40.346 [2024-11-26 18:08:28.297854] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:40.346 true 00:12:40.346 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:40.346 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:40.604 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:40.605 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:40.862 18:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b7ec09dd-779b-470b-80c7-8b59db002e2b 00:12:41.427 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:41.427 [2024-11-26 18:08:29.389078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.427 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=516048 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 516048 /var/tmp/bdevperf.sock 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 516048 ']' 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:41.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.685 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:41.943 [2024-11-26 18:08:29.719894] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
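The lvs_grow_clean test traced above follows a simple target-side RPC flow: back a logical-volume store with a 200 MiB file-based AIO bdev (4 MiB clusters, hence the 49 data clusters reported), carve out a 150 MiB lvol, grow the backing file to 400 MiB and rescan the AIO bdev, export the lvol over NVMe/TCP, and only then grow the lvstore while bdevperf is writing to it, checking that total_data_clusters moves from 49 to 99. A condensed sketch with the values from this run, where rpc.py stands for scripts/rpc.py and $TESTDIR, $LVS_UUID and $LVOL_UUID are placeholders for the path and UUIDs printed in the trace:

    truncate -s 200M $TESTDIR/aio_bdev
    rpc.py bdev_aio_create $TESTDIR/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u $LVS_UUID lvol 150
    truncate -s 400M $TESTDIR/aio_bdev         # grow the backing file...
    rpc.py bdev_aio_rescan aio_bdev            # ...AIO bdev goes from 51200 to 102400 blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $LVOL_UUID
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while bdevperf runs random writes against the namespace:
    rpc.py bdev_lvol_grow_lvstore -u $LVS_UUID
    rpc.py bdev_lvol_get_lvstores -u $LVS_UUID | jq -r '.[0].total_data_clusters'   # expect 99

Note that the lvstore still reports 49 data clusters right after the rescan; the extra space only becomes visible once bdev_lvol_grow_lvstore is issued, which is what the @60/@61 checks further down verify.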
00:12:41.944 [2024-11-26 18:08:29.719962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516048 ] 00:12:41.944 [2024-11-26 18:08:29.785283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.944 [2024-11-26 18:08:29.841938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.944 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.944 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:12:41.944 18:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:42.509 Nvme0n1 00:12:42.509 18:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:42.768 [ 00:12:42.768 { 00:12:42.768 "name": "Nvme0n1", 00:12:42.768 "aliases": [ 00:12:42.768 "b7ec09dd-779b-470b-80c7-8b59db002e2b" 00:12:42.768 ], 00:12:42.768 "product_name": "NVMe disk", 00:12:42.768 "block_size": 4096, 00:12:42.768 "num_blocks": 38912, 00:12:42.768 "uuid": "b7ec09dd-779b-470b-80c7-8b59db002e2b", 00:12:42.768 "numa_id": 0, 00:12:42.768 "assigned_rate_limits": { 00:12:42.768 "rw_ios_per_sec": 0, 00:12:42.768 "rw_mbytes_per_sec": 0, 00:12:42.768 "r_mbytes_per_sec": 0, 00:12:42.768 "w_mbytes_per_sec": 0 00:12:42.768 }, 00:12:42.768 "claimed": false, 00:12:42.768 "zoned": false, 00:12:42.768 "supported_io_types": { 00:12:42.768 "read": true, 00:12:42.768 "write": true, 00:12:42.768 "unmap": true, 00:12:42.768 "flush": true, 00:12:42.768 "reset": true, 00:12:42.768 "nvme_admin": true, 00:12:42.768 "nvme_io": true, 00:12:42.768 "nvme_io_md": false, 00:12:42.768 "write_zeroes": true, 00:12:42.768 "zcopy": false, 00:12:42.768 "get_zone_info": false, 00:12:42.768 "zone_management": false, 00:12:42.768 "zone_append": false, 00:12:42.768 "compare": true, 00:12:42.768 "compare_and_write": true, 00:12:42.768 "abort": true, 00:12:42.768 "seek_hole": false, 00:12:42.768 "seek_data": false, 00:12:42.768 "copy": true, 00:12:42.768 "nvme_iov_md": false 00:12:42.768 }, 00:12:42.768 "memory_domains": [ 00:12:42.768 { 00:12:42.768 "dma_device_id": "system", 00:12:42.768 "dma_device_type": 1 00:12:42.768 } 00:12:42.768 ], 00:12:42.768 "driver_specific": { 00:12:42.768 "nvme": [ 00:12:42.768 { 00:12:42.768 "trid": { 00:12:42.768 "trtype": "TCP", 00:12:42.768 "adrfam": "IPv4", 00:12:42.768 "traddr": "10.0.0.2", 00:12:42.768 "trsvcid": "4420", 00:12:42.768 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:42.768 }, 00:12:42.768 "ctrlr_data": { 00:12:42.768 "cntlid": 1, 00:12:42.768 "vendor_id": "0x8086", 00:12:42.768 "model_number": "SPDK bdev Controller", 00:12:42.768 "serial_number": "SPDK0", 00:12:42.768 "firmware_revision": "25.01", 00:12:42.768 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:42.768 "oacs": { 00:12:42.768 "security": 0, 00:12:42.768 "format": 0, 00:12:42.768 "firmware": 0, 00:12:42.768 "ns_manage": 0 00:12:42.768 }, 00:12:42.768 "multi_ctrlr": true, 00:12:42.768 
"ana_reporting": false 00:12:42.768 }, 00:12:42.768 "vs": { 00:12:42.768 "nvme_version": "1.3" 00:12:42.768 }, 00:12:42.768 "ns_data": { 00:12:42.768 "id": 1, 00:12:42.768 "can_share": true 00:12:42.768 } 00:12:42.768 } 00:12:42.768 ], 00:12:42.768 "mp_policy": "active_passive" 00:12:42.768 } 00:12:42.768 } 00:12:42.768 ] 00:12:42.768 18:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=516065 00:12:42.768 18:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:42.768 18:08:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:42.768 Running I/O for 10 seconds... 00:12:43.701 Latency(us) 00:12:43.701 [2024-11-26T17:08:31.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.701 Nvme0n1 : 1.00 15021.00 58.68 0.00 0.00 0.00 0.00 0.00 00:12:43.701 [2024-11-26T17:08:31.712Z] =================================================================================================================== 00:12:43.701 [2024-11-26T17:08:31.712Z] Total : 15021.00 58.68 0.00 0.00 0.00 0.00 0.00 00:12:43.701 00:12:44.633 18:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:44.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.891 Nvme0n1 : 2.00 15226.00 59.48 0.00 0.00 0.00 0.00 0.00 00:12:44.891 [2024-11-26T17:08:32.902Z] =================================================================================================================== 00:12:44.891 [2024-11-26T17:08:32.902Z] Total : 15226.00 59.48 0.00 0.00 0.00 0.00 0.00 00:12:44.891 00:12:44.891 true 00:12:44.891 18:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:44.891 18:08:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:45.149 18:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:45.149 18:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:45.149 18:08:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 516065 00:12:45.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.719 Nvme0n1 : 3.00 15305.33 59.79 0.00 0.00 0.00 0.00 0.00 00:12:45.719 [2024-11-26T17:08:33.730Z] =================================================================================================================== 00:12:45.719 [2024-11-26T17:08:33.730Z] Total : 15305.33 59.79 0.00 0.00 0.00 0.00 0.00 00:12:45.719 00:12:47.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.093 Nvme0n1 : 4.00 15400.25 60.16 0.00 0.00 0.00 0.00 0.00 00:12:47.093 [2024-11-26T17:08:35.104Z] 
=================================================================================================================== 00:12:47.093 [2024-11-26T17:08:35.104Z] Total : 15400.25 60.16 0.00 0.00 0.00 0.00 0.00 00:12:47.093 00:12:48.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:48.027 Nvme0n1 : 5.00 15471.80 60.44 0.00 0.00 0.00 0.00 0.00 00:12:48.027 [2024-11-26T17:08:36.038Z] =================================================================================================================== 00:12:48.027 [2024-11-26T17:08:36.038Z] Total : 15471.80 60.44 0.00 0.00 0.00 0.00 0.00 00:12:48.027 00:12:48.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:48.961 Nvme0n1 : 6.00 15517.83 60.62 0.00 0.00 0.00 0.00 0.00 00:12:48.961 [2024-11-26T17:08:36.972Z] =================================================================================================================== 00:12:48.961 [2024-11-26T17:08:36.972Z] Total : 15517.83 60.62 0.00 0.00 0.00 0.00 0.00 00:12:48.961 00:12:49.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.896 Nvme0n1 : 7.00 15560.29 60.78 0.00 0.00 0.00 0.00 0.00 00:12:49.896 [2024-11-26T17:08:37.907Z] =================================================================================================================== 00:12:49.896 [2024-11-26T17:08:37.907Z] Total : 15560.29 60.78 0.00 0.00 0.00 0.00 0.00 00:12:49.896 00:12:50.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.828 Nvme0n1 : 8.00 15592.00 60.91 0.00 0.00 0.00 0.00 0.00 00:12:50.828 [2024-11-26T17:08:38.839Z] =================================================================================================================== 00:12:50.828 [2024-11-26T17:08:38.839Z] Total : 15592.00 60.91 0.00 0.00 0.00 0.00 0.00 00:12:50.828 00:12:51.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:51.759 Nvme0n1 : 9.00 15623.78 61.03 0.00 0.00 0.00 0.00 0.00 00:12:51.759 [2024-11-26T17:08:39.770Z] =================================================================================================================== 00:12:51.759 [2024-11-26T17:08:39.770Z] Total : 15623.78 61.03 0.00 0.00 0.00 0.00 0.00 00:12:51.759 00:12:53.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.132 Nvme0n1 : 10.00 15630.00 61.05 0.00 0.00 0.00 0.00 0.00 00:12:53.132 [2024-11-26T17:08:41.143Z] =================================================================================================================== 00:12:53.132 [2024-11-26T17:08:41.143Z] Total : 15630.00 61.05 0.00 0.00 0.00 0.00 0.00 00:12:53.132 00:12:53.132 00:12:53.132 Latency(us) 00:12:53.132 [2024-11-26T17:08:41.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.132 Nvme0n1 : 10.00 15635.07 61.07 0.00 0.00 8181.82 2415.12 16311.18 00:12:53.132 [2024-11-26T17:08:41.143Z] =================================================================================================================== 00:12:53.132 [2024-11-26T17:08:41.143Z] Total : 15635.07 61.07 0.00 0.00 8181.82 2415.12 16311.18 00:12:53.132 { 00:12:53.132 "results": [ 00:12:53.132 { 00:12:53.132 "job": "Nvme0n1", 00:12:53.132 "core_mask": "0x2", 00:12:53.132 "workload": "randwrite", 00:12:53.132 "status": "finished", 00:12:53.132 "queue_depth": 128, 00:12:53.132 "io_size": 4096, 00:12:53.132 
"runtime": 10.004941, 00:12:53.132 "iops": 15635.074709585993, 00:12:53.132 "mibps": 61.07451058432029, 00:12:53.132 "io_failed": 0, 00:12:53.132 "io_timeout": 0, 00:12:53.132 "avg_latency_us": 8181.816296826655, 00:12:53.132 "min_latency_us": 2415.122962962963, 00:12:53.132 "max_latency_us": 16311.182222222222 00:12:53.132 } 00:12:53.132 ], 00:12:53.132 "core_count": 1 00:12:53.132 } 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 516048 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 516048 ']' 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 516048 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 516048 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 516048' 00:12:53.132 killing process with pid 516048 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 516048 00:12:53.132 Received shutdown signal, test time was about 10.000000 seconds 00:12:53.132 00:12:53.132 Latency(us) 00:12:53.132 [2024-11-26T17:08:41.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.132 [2024-11-26T17:08:41.143Z] =================================================================================================================== 00:12:53.132 [2024-11-26T17:08:41.143Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 516048 00:12:53.132 18:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:53.390 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:53.648 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:53.648 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:53.906 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:53.906 18:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:53.906 18:08:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:54.164 [2024-11-26 18:08:42.033089] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:54.164 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:54.164 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:54.165 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:54.423 request: 00:12:54.423 { 00:12:54.423 "uuid": "cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb", 00:12:54.423 "method": "bdev_lvol_get_lvstores", 00:12:54.423 "req_id": 1 00:12:54.423 } 00:12:54.423 Got JSON-RPC error response 00:12:54.423 response: 00:12:54.423 { 00:12:54.423 "code": -19, 00:12:54.423 "message": "No such device" 00:12:54.423 } 00:12:54.423 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:54.423 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:54.423 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:54.423 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:54.423 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:54.681 aio_bdev 00:12:54.681 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b7ec09dd-779b-470b-80c7-8b59db002e2b 00:12:54.681 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b7ec09dd-779b-470b-80c7-8b59db002e2b 00:12:54.681 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:54.681 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:54.681 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:54.681 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:54.681 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:54.939 18:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b7ec09dd-779b-470b-80c7-8b59db002e2b -t 2000 00:12:55.197 [ 00:12:55.197 { 00:12:55.197 "name": "b7ec09dd-779b-470b-80c7-8b59db002e2b", 00:12:55.197 "aliases": [ 00:12:55.197 "lvs/lvol" 00:12:55.197 ], 00:12:55.197 "product_name": "Logical Volume", 00:12:55.197 "block_size": 4096, 00:12:55.197 "num_blocks": 38912, 00:12:55.197 "uuid": "b7ec09dd-779b-470b-80c7-8b59db002e2b", 00:12:55.197 "assigned_rate_limits": { 00:12:55.197 "rw_ios_per_sec": 0, 00:12:55.197 "rw_mbytes_per_sec": 0, 00:12:55.197 "r_mbytes_per_sec": 0, 00:12:55.197 "w_mbytes_per_sec": 0 00:12:55.197 }, 00:12:55.197 "claimed": false, 00:12:55.197 "zoned": false, 00:12:55.197 "supported_io_types": { 00:12:55.197 "read": true, 00:12:55.197 "write": true, 00:12:55.197 "unmap": true, 00:12:55.197 "flush": false, 00:12:55.197 "reset": true, 00:12:55.197 "nvme_admin": false, 00:12:55.197 "nvme_io": false, 00:12:55.197 "nvme_io_md": false, 00:12:55.197 "write_zeroes": true, 00:12:55.197 "zcopy": false, 00:12:55.197 "get_zone_info": false, 00:12:55.197 "zone_management": false, 00:12:55.197 "zone_append": false, 00:12:55.197 "compare": false, 00:12:55.197 "compare_and_write": false, 00:12:55.197 "abort": false, 00:12:55.197 "seek_hole": true, 00:12:55.197 "seek_data": true, 00:12:55.197 "copy": false, 00:12:55.197 "nvme_iov_md": false 00:12:55.197 }, 00:12:55.197 "driver_specific": { 00:12:55.197 "lvol": { 00:12:55.197 "lvol_store_uuid": "cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb", 00:12:55.197 "base_bdev": "aio_bdev", 00:12:55.197 "thin_provision": false, 00:12:55.197 "num_allocated_clusters": 38, 00:12:55.197 "snapshot": false, 00:12:55.197 "clone": false, 00:12:55.197 "esnap_clone": false 00:12:55.197 } 00:12:55.197 } 00:12:55.197 } 00:12:55.197 ] 00:12:55.197 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:55.197 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:55.197 
18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:55.456 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:55.456 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:55.456 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:55.716 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:55.716 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b7ec09dd-779b-470b-80c7-8b59db002e2b 00:12:55.974 18:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cefdd9cb-56db-4b90-8a1c-c8f4574a8cfb 00:12:56.541 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:56.541 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:56.799 00:12:56.799 real 0m17.643s 00:12:56.799 user 0m17.264s 00:12:56.799 sys 0m1.778s 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:56.799 ************************************ 00:12:56.799 END TEST lvs_grow_clean 00:12:56.799 ************************************ 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:56.799 ************************************ 00:12:56.799 START TEST lvs_grow_dirty 00:12:56.799 ************************************ 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:56.799 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:56.800 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:56.800 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:57.057 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:57.057 18:08:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:57.315 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:12:57.315 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:12:57.315 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:57.573 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:57.573 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:57.573 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 lvol 150 00:12:57.831 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d6d835e1-556e-4637-a30f-3660b5a0933d 00:12:57.831 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:57.831 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:58.087 [2024-11-26 18:08:45.980757] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:58.087 [2024-11-26 18:08:45.980842] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:58.087 true 00:12:58.087 18:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:12:58.087 18:08:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:58.344 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:58.344 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:58.601 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d6d835e1-556e-4637-a30f-3660b5a0933d 00:12:58.858 18:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:59.114 [2024-11-26 18:08:47.068001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.114 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=518115 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 518115 /var/tmp/bdevperf.sock 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 518115 ']' 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:59.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.406 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:59.406 [2024-11-26 18:08:47.397820] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
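As in the clean case, the initiator side here is bdevperf run as a standalone SPDK application: started with -z it brings up its own RPC socket and waits, the test then attaches the exported namespace as an NVMe-oF/TCP controller, and only afterwards triggers the 10-second random-write run whose per-second IOPS table and JSON summary follow. The pattern, roughly, with paths shortened relative to the SPDK tree and the backgrounding of bdevperf left implicit in the trace:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

-o 4096 -q 128 -w randwrite -t 10 select the 4 KiB random-write workload at queue depth 128 for 10 seconds that produces the Nvme0n1 results below.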
00:12:59.406 [2024-11-26 18:08:47.397888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid518115 ] 00:12:59.662 [2024-11-26 18:08:47.463588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.662 [2024-11-26 18:08:47.520372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.662 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.662 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:59.662 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:00.226 Nvme0n1 00:13:00.226 18:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:00.226 [ 00:13:00.226 { 00:13:00.226 "name": "Nvme0n1", 00:13:00.226 "aliases": [ 00:13:00.226 "d6d835e1-556e-4637-a30f-3660b5a0933d" 00:13:00.226 ], 00:13:00.226 "product_name": "NVMe disk", 00:13:00.226 "block_size": 4096, 00:13:00.226 "num_blocks": 38912, 00:13:00.226 "uuid": "d6d835e1-556e-4637-a30f-3660b5a0933d", 00:13:00.226 "numa_id": 0, 00:13:00.226 "assigned_rate_limits": { 00:13:00.226 "rw_ios_per_sec": 0, 00:13:00.226 "rw_mbytes_per_sec": 0, 00:13:00.226 "r_mbytes_per_sec": 0, 00:13:00.226 "w_mbytes_per_sec": 0 00:13:00.226 }, 00:13:00.226 "claimed": false, 00:13:00.226 "zoned": false, 00:13:00.226 "supported_io_types": { 00:13:00.226 "read": true, 00:13:00.226 "write": true, 00:13:00.226 "unmap": true, 00:13:00.226 "flush": true, 00:13:00.226 "reset": true, 00:13:00.226 "nvme_admin": true, 00:13:00.226 "nvme_io": true, 00:13:00.226 "nvme_io_md": false, 00:13:00.226 "write_zeroes": true, 00:13:00.226 "zcopy": false, 00:13:00.226 "get_zone_info": false, 00:13:00.226 "zone_management": false, 00:13:00.226 "zone_append": false, 00:13:00.226 "compare": true, 00:13:00.227 "compare_and_write": true, 00:13:00.227 "abort": true, 00:13:00.227 "seek_hole": false, 00:13:00.227 "seek_data": false, 00:13:00.227 "copy": true, 00:13:00.227 "nvme_iov_md": false 00:13:00.227 }, 00:13:00.227 "memory_domains": [ 00:13:00.227 { 00:13:00.227 "dma_device_id": "system", 00:13:00.227 "dma_device_type": 1 00:13:00.227 } 00:13:00.227 ], 00:13:00.227 "driver_specific": { 00:13:00.227 "nvme": [ 00:13:00.227 { 00:13:00.227 "trid": { 00:13:00.227 "trtype": "TCP", 00:13:00.227 "adrfam": "IPv4", 00:13:00.227 "traddr": "10.0.0.2", 00:13:00.227 "trsvcid": "4420", 00:13:00.227 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:00.227 }, 00:13:00.227 "ctrlr_data": { 00:13:00.227 "cntlid": 1, 00:13:00.227 "vendor_id": "0x8086", 00:13:00.227 "model_number": "SPDK bdev Controller", 00:13:00.227 "serial_number": "SPDK0", 00:13:00.227 "firmware_revision": "25.01", 00:13:00.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:00.227 "oacs": { 00:13:00.227 "security": 0, 00:13:00.227 "format": 0, 00:13:00.227 "firmware": 0, 00:13:00.227 "ns_manage": 0 00:13:00.227 }, 00:13:00.227 "multi_ctrlr": true, 00:13:00.227 
"ana_reporting": false 00:13:00.227 }, 00:13:00.227 "vs": { 00:13:00.227 "nvme_version": "1.3" 00:13:00.227 }, 00:13:00.227 "ns_data": { 00:13:00.227 "id": 1, 00:13:00.227 "can_share": true 00:13:00.227 } 00:13:00.227 } 00:13:00.227 ], 00:13:00.227 "mp_policy": "active_passive" 00:13:00.227 } 00:13:00.227 } 00:13:00.227 ] 00:13:00.484 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=518251 00:13:00.484 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:00.484 18:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:00.484 Running I/O for 10 seconds... 00:13:01.470 Latency(us) 00:13:01.470 [2024-11-26T17:08:49.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:01.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.470 Nvme0n1 : 1.00 14606.00 57.05 0.00 0.00 0.00 0.00 0.00 00:13:01.470 [2024-11-26T17:08:49.481Z] =================================================================================================================== 00:13:01.470 [2024-11-26T17:08:49.481Z] Total : 14606.00 57.05 0.00 0.00 0.00 0.00 0.00 00:13:01.470 00:13:02.423 18:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:02.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.423 Nvme0n1 : 2.00 14732.50 57.55 0.00 0.00 0.00 0.00 0.00 00:13:02.423 [2024-11-26T17:08:50.434Z] =================================================================================================================== 00:13:02.423 [2024-11-26T17:08:50.434Z] Total : 14732.50 57.55 0.00 0.00 0.00 0.00 0.00 00:13:02.423 00:13:02.682 true 00:13:02.682 18:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:02.682 18:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:02.940 18:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:02.940 18:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:02.940 18:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 518251 00:13:03.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.506 Nvme0n1 : 3.00 14774.67 57.71 0.00 0.00 0.00 0.00 0.00 00:13:03.506 [2024-11-26T17:08:51.517Z] =================================================================================================================== 00:13:03.506 [2024-11-26T17:08:51.517Z] Total : 14774.67 57.71 0.00 0.00 0.00 0.00 0.00 00:13:03.506 00:13:04.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:04.440 Nvme0n1 : 4.00 14876.25 58.11 0.00 0.00 0.00 0.00 0.00 00:13:04.440 [2024-11-26T17:08:52.451Z] 
=================================================================================================================== 00:13:04.440 [2024-11-26T17:08:52.451Z] Total : 14876.25 58.11 0.00 0.00 0.00 0.00 0.00 00:13:04.440 00:13:05.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.374 Nvme0n1 : 5.00 14962.00 58.45 0.00 0.00 0.00 0.00 0.00 00:13:05.374 [2024-11-26T17:08:53.385Z] =================================================================================================================== 00:13:05.374 [2024-11-26T17:08:53.385Z] Total : 14962.00 58.45 0.00 0.00 0.00 0.00 0.00 00:13:05.374 00:13:06.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:06.747 Nvme0n1 : 6.00 15019.67 58.67 0.00 0.00 0.00 0.00 0.00 00:13:06.747 [2024-11-26T17:08:54.758Z] =================================================================================================================== 00:13:06.747 [2024-11-26T17:08:54.758Z] Total : 15019.67 58.67 0.00 0.00 0.00 0.00 0.00 00:13:06.747 00:13:07.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:07.681 Nvme0n1 : 7.00 15078.43 58.90 0.00 0.00 0.00 0.00 0.00 00:13:07.681 [2024-11-26T17:08:55.692Z] =================================================================================================================== 00:13:07.681 [2024-11-26T17:08:55.692Z] Total : 15078.43 58.90 0.00 0.00 0.00 0.00 0.00 00:13:07.681 00:13:08.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.613 Nvme0n1 : 8.00 15138.25 59.13 0.00 0.00 0.00 0.00 0.00 00:13:08.613 [2024-11-26T17:08:56.624Z] =================================================================================================================== 00:13:08.613 [2024-11-26T17:08:56.624Z] Total : 15138.25 59.13 0.00 0.00 0.00 0.00 0.00 00:13:08.613 00:13:09.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.547 Nvme0n1 : 9.00 15177.78 59.29 0.00 0.00 0.00 0.00 0.00 00:13:09.547 [2024-11-26T17:08:57.558Z] =================================================================================================================== 00:13:09.547 [2024-11-26T17:08:57.558Z] Total : 15177.78 59.29 0.00 0.00 0.00 0.00 0.00 00:13:09.547 00:13:10.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.481 Nvme0n1 : 10.00 15215.80 59.44 0.00 0.00 0.00 0.00 0.00 00:13:10.481 [2024-11-26T17:08:58.492Z] =================================================================================================================== 00:13:10.481 [2024-11-26T17:08:58.492Z] Total : 15215.80 59.44 0.00 0.00 0.00 0.00 0.00 00:13:10.481 00:13:10.481 00:13:10.481 Latency(us) 00:13:10.481 [2024-11-26T17:08:58.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.481 Nvme0n1 : 10.00 15215.05 59.43 0.00 0.00 8407.14 2269.49 15922.82 00:13:10.481 [2024-11-26T17:08:58.492Z] =================================================================================================================== 00:13:10.481 [2024-11-26T17:08:58.492Z] Total : 15215.05 59.43 0.00 0.00 8407.14 2269.49 15922.82 00:13:10.481 { 00:13:10.481 "results": [ 00:13:10.481 { 00:13:10.481 "job": "Nvme0n1", 00:13:10.481 "core_mask": "0x2", 00:13:10.481 "workload": "randwrite", 00:13:10.481 "status": "finished", 00:13:10.481 "queue_depth": 128, 00:13:10.481 "io_size": 4096, 00:13:10.481 
"runtime": 10.004701, 00:13:10.481 "iops": 15215.047406214338, 00:13:10.481 "mibps": 59.43377893052476, 00:13:10.481 "io_failed": 0, 00:13:10.481 "io_timeout": 0, 00:13:10.481 "avg_latency_us": 8407.138857253807, 00:13:10.481 "min_latency_us": 2269.4874074074073, 00:13:10.481 "max_latency_us": 15922.82074074074 00:13:10.481 } 00:13:10.481 ], 00:13:10.481 "core_count": 1 00:13:10.481 } 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 518115 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 518115 ']' 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 518115 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 518115 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 518115' 00:13:10.481 killing process with pid 518115 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 518115 00:13:10.481 Received shutdown signal, test time was about 10.000000 seconds 00:13:10.481 00:13:10.481 Latency(us) 00:13:10.481 [2024-11-26T17:08:58.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.481 [2024-11-26T17:08:58.492Z] =================================================================================================================== 00:13:10.481 [2024-11-26T17:08:58.492Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:10.481 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 518115 00:13:10.739 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:10.997 18:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:11.255 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:11.255 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:11.512 18:08:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 515603 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 515603 00:13:11.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 515603 Killed "${NVMF_APP[@]}" "$@" 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=519586 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 519586 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 519586 ']' 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.512 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:11.512 [2024-11-26 18:08:59.512967] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:13:11.512 [2024-11-26 18:08:59.513033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.770 [2024-11-26 18:08:59.584772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.770 [2024-11-26 18:08:59.639730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.770 [2024-11-26 18:08:59.639787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.770 [2024-11-26 18:08:59.639809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.770 [2024-11-26 18:08:59.639819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
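The dirty-lvstore case hinges on the two steps traced just above: the first nvmf_tgt (pid 515603) is killed with SIGKILL so the lvstore is never cleanly unloaded, and a second target is then started inside the test namespace with all tracepoint groups enabled. Reduced to a sketch of the commands this run actually issued (the pid, namespace name and RPC socket are the ones from this log; waitforlisten is the autotest helper that polls the socket), the restart looks like:

    # Leave the lvstore dirty: the old target never gets to close the blobstore
    kill -9 515603
    # Start a fresh target in the test namespace and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    waitforlisten $nvmfpid          # polls /var/tmp/spdk.sock until the app answers
    # With -e 0xFFFF enabled, a runtime snapshot can be captured as the notice suggests:
    #   spdk_trace -s nvmf -i 0

The blobstore recovery messages that follow are the point of the test: the aio_bdev is re-created, the lvstore is replayed from its metadata, and the grown cluster count (99 total, 61 free) has to survive the crash.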
00:13:11.770 [2024-11-26 18:08:59.639830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.770 [2024-11-26 18:08:59.640398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.770 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.770 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:11.770 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:11.770 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.770 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:11.770 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.770 18:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:12.028 [2024-11-26 18:09:00.033011] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:12.028 [2024-11-26 18:09:00.033150] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:12.028 [2024-11-26 18:09:00.033198] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:12.287 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:12.287 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d6d835e1-556e-4637-a30f-3660b5a0933d 00:13:12.287 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d6d835e1-556e-4637-a30f-3660b5a0933d 00:13:12.287 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:12.287 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:12.287 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:12.287 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:12.287 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:12.546 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d6d835e1-556e-4637-a30f-3660b5a0933d -t 2000 00:13:12.804 [ 00:13:12.804 { 00:13:12.804 "name": "d6d835e1-556e-4637-a30f-3660b5a0933d", 00:13:12.804 "aliases": [ 00:13:12.804 "lvs/lvol" 00:13:12.804 ], 00:13:12.804 "product_name": "Logical Volume", 00:13:12.804 "block_size": 4096, 00:13:12.804 "num_blocks": 38912, 00:13:12.804 "uuid": "d6d835e1-556e-4637-a30f-3660b5a0933d", 00:13:12.804 "assigned_rate_limits": { 00:13:12.804 "rw_ios_per_sec": 0, 00:13:12.804 "rw_mbytes_per_sec": 0, 
00:13:12.804 "r_mbytes_per_sec": 0, 00:13:12.804 "w_mbytes_per_sec": 0 00:13:12.804 }, 00:13:12.804 "claimed": false, 00:13:12.804 "zoned": false, 00:13:12.804 "supported_io_types": { 00:13:12.804 "read": true, 00:13:12.804 "write": true, 00:13:12.804 "unmap": true, 00:13:12.804 "flush": false, 00:13:12.804 "reset": true, 00:13:12.804 "nvme_admin": false, 00:13:12.804 "nvme_io": false, 00:13:12.804 "nvme_io_md": false, 00:13:12.804 "write_zeroes": true, 00:13:12.804 "zcopy": false, 00:13:12.804 "get_zone_info": false, 00:13:12.804 "zone_management": false, 00:13:12.804 "zone_append": false, 00:13:12.804 "compare": false, 00:13:12.804 "compare_and_write": false, 00:13:12.804 "abort": false, 00:13:12.804 "seek_hole": true, 00:13:12.804 "seek_data": true, 00:13:12.805 "copy": false, 00:13:12.805 "nvme_iov_md": false 00:13:12.805 }, 00:13:12.805 "driver_specific": { 00:13:12.805 "lvol": { 00:13:12.805 "lvol_store_uuid": "9b0d5b03-ab99-42fb-b529-3366fe412ee2", 00:13:12.805 "base_bdev": "aio_bdev", 00:13:12.805 "thin_provision": false, 00:13:12.805 "num_allocated_clusters": 38, 00:13:12.805 "snapshot": false, 00:13:12.805 "clone": false, 00:13:12.805 "esnap_clone": false 00:13:12.805 } 00:13:12.805 } 00:13:12.805 } 00:13:12.805 ] 00:13:12.805 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:12.805 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:12.805 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:13.063 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:13.063 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:13.063 18:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:13.320 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:13.320 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:13.579 [2024-11-26 18:09:01.438559] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:13.579 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:13.837 request: 00:13:13.837 { 00:13:13.837 "uuid": "9b0d5b03-ab99-42fb-b529-3366fe412ee2", 00:13:13.837 "method": "bdev_lvol_get_lvstores", 00:13:13.837 "req_id": 1 00:13:13.837 } 00:13:13.837 Got JSON-RPC error response 00:13:13.837 response: 00:13:13.837 { 00:13:13.837 "code": -19, 00:13:13.837 "message": "No such device" 00:13:13.837 } 00:13:13.837 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:13:13.837 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:13.837 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:13.837 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:13.837 18:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:14.095 aio_bdev 00:13:14.095 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d6d835e1-556e-4637-a30f-3660b5a0933d 00:13:14.095 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d6d835e1-556e-4637-a30f-3660b5a0933d 00:13:14.095 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.095 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:14.095 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.095 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.095 18:09:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:14.353 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d6d835e1-556e-4637-a30f-3660b5a0933d -t 2000 00:13:14.611 [ 00:13:14.611 { 00:13:14.611 "name": "d6d835e1-556e-4637-a30f-3660b5a0933d", 00:13:14.611 "aliases": [ 00:13:14.611 "lvs/lvol" 00:13:14.611 ], 00:13:14.611 "product_name": "Logical Volume", 00:13:14.611 "block_size": 4096, 00:13:14.611 "num_blocks": 38912, 00:13:14.611 "uuid": "d6d835e1-556e-4637-a30f-3660b5a0933d", 00:13:14.611 "assigned_rate_limits": { 00:13:14.611 "rw_ios_per_sec": 0, 00:13:14.611 "rw_mbytes_per_sec": 0, 00:13:14.611 "r_mbytes_per_sec": 0, 00:13:14.611 "w_mbytes_per_sec": 0 00:13:14.611 }, 00:13:14.611 "claimed": false, 00:13:14.611 "zoned": false, 00:13:14.611 "supported_io_types": { 00:13:14.611 "read": true, 00:13:14.611 "write": true, 00:13:14.611 "unmap": true, 00:13:14.611 "flush": false, 00:13:14.611 "reset": true, 00:13:14.611 "nvme_admin": false, 00:13:14.611 "nvme_io": false, 00:13:14.611 "nvme_io_md": false, 00:13:14.611 "write_zeroes": true, 00:13:14.611 "zcopy": false, 00:13:14.611 "get_zone_info": false, 00:13:14.611 "zone_management": false, 00:13:14.611 "zone_append": false, 00:13:14.611 "compare": false, 00:13:14.611 "compare_and_write": false, 00:13:14.611 "abort": false, 00:13:14.611 "seek_hole": true, 00:13:14.611 "seek_data": true, 00:13:14.611 "copy": false, 00:13:14.611 "nvme_iov_md": false 00:13:14.611 }, 00:13:14.611 "driver_specific": { 00:13:14.611 "lvol": { 00:13:14.611 "lvol_store_uuid": "9b0d5b03-ab99-42fb-b529-3366fe412ee2", 00:13:14.611 "base_bdev": "aio_bdev", 00:13:14.611 "thin_provision": false, 00:13:14.611 "num_allocated_clusters": 38, 00:13:14.611 "snapshot": false, 00:13:14.611 "clone": false, 00:13:14.611 "esnap_clone": false 00:13:14.611 } 00:13:14.611 } 00:13:14.611 } 00:13:14.611 ] 00:13:14.611 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:14.611 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:14.611 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:14.869 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:14.869 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:14.869 18:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:15.126 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:15.126 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d6d835e1-556e-4637-a30f-3660b5a0933d 00:13:15.384 18:09:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b0d5b03-ab99-42fb-b529-3366fe412ee2 00:13:15.951 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:15.951 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:16.209 00:13:16.209 real 0m19.352s 00:13:16.209 user 0m48.944s 00:13:16.209 sys 0m4.621s 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:16.209 ************************************ 00:13:16.209 END TEST lvs_grow_dirty 00:13:16.209 ************************************ 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:16.209 18:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:16.209 nvmf_trace.0 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:16.209 rmmod nvme_tcp 00:13:16.209 rmmod nvme_fabrics 00:13:16.209 rmmod nvme_keyring 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:13:16.209 
18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 519586 ']' 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 519586 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 519586 ']' 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 519586 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 519586 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 519586' 00:13:16.209 killing process with pid 519586 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 519586 00:13:16.209 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 519586 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:16.468 18:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.377 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:18.377 00:13:18.377 real 0m42.567s 00:13:18.377 user 1m12.300s 00:13:18.377 sys 0m8.455s 00:13:18.377 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.377 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:18.377 ************************************ 00:13:18.377 END TEST nvmf_lvs_grow 00:13:18.377 ************************************ 00:13:18.636 18:09:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:18.636 18:09:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.636 18:09:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.636 18:09:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:18.636 ************************************ 00:13:18.636 START TEST nvmf_bdev_io_wait 00:13:18.636 ************************************ 00:13:18.636 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:18.637 * Looking for test storage... 00:13:18.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:18.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.637 --rc genhtml_branch_coverage=1 00:13:18.637 --rc genhtml_function_coverage=1 00:13:18.637 --rc genhtml_legend=1 00:13:18.637 --rc geninfo_all_blocks=1 00:13:18.637 --rc geninfo_unexecuted_blocks=1 00:13:18.637 00:13:18.637 ' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:18.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.637 --rc genhtml_branch_coverage=1 00:13:18.637 --rc genhtml_function_coverage=1 00:13:18.637 --rc genhtml_legend=1 00:13:18.637 --rc geninfo_all_blocks=1 00:13:18.637 --rc geninfo_unexecuted_blocks=1 00:13:18.637 00:13:18.637 ' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:18.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.637 --rc genhtml_branch_coverage=1 00:13:18.637 --rc genhtml_function_coverage=1 00:13:18.637 --rc genhtml_legend=1 00:13:18.637 --rc geninfo_all_blocks=1 00:13:18.637 --rc geninfo_unexecuted_blocks=1 00:13:18.637 00:13:18.637 ' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:18.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.637 --rc genhtml_branch_coverage=1 00:13:18.637 --rc genhtml_function_coverage=1 00:13:18.637 --rc genhtml_legend=1 00:13:18.637 --rc geninfo_all_blocks=1 00:13:18.637 --rc geninfo_unexecuted_blocks=1 00:13:18.637 00:13:18.637 ' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.637 18:09:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.637 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:13:18.638 18:09:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:21.170 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:21.170 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.170 18:09:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:21.170 Found net devices under 0000:09:00.0: cvl_0_0 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:21.170 Found net devices under 0000:09:00.1: cvl_0_1 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.170 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:21.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:13:21.171 00:13:21.171 --- 10.0.0.2 ping statistics --- 00:13:21.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.171 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:13:21.171 00:13:21.171 --- 10.0.0.1 ping statistics --- 00:13:21.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.171 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=522703 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 522703 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 522703 ']' 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:21.171 18:09:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.171 [2024-11-26 18:09:09.015983] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:13:21.171 [2024-11-26 18:09:09.016061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.171 [2024-11-26 18:09:09.093372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.171 [2024-11-26 18:09:09.155286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.171 [2024-11-26 18:09:09.155367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.171 [2024-11-26 18:09:09.155396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.171 [2024-11-26 18:09:09.155407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.171 [2024-11-26 18:09:09.155417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.171 [2024-11-26 18:09:09.157906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.171 [2024-11-26 18:09:09.158018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.171 [2024-11-26 18:09:09.158097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.171 [2024-11-26 18:09:09.158100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:13:21.430 [2024-11-26 18:09:09.354040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.430 Malloc0 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:21.430 [2024-11-26 18:09:09.405338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=522894 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:21.430 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=522896 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:21.431 { 00:13:21.431 "params": { 
00:13:21.431 "name": "Nvme$subsystem", 00:13:21.431 "trtype": "$TEST_TRANSPORT", 00:13:21.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:21.431 "adrfam": "ipv4", 00:13:21.431 "trsvcid": "$NVMF_PORT", 00:13:21.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:21.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:21.431 "hdgst": ${hdgst:-false}, 00:13:21.431 "ddgst": ${ddgst:-false} 00:13:21.431 }, 00:13:21.431 "method": "bdev_nvme_attach_controller" 00:13:21.431 } 00:13:21.431 EOF 00:13:21.431 )") 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=522898 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:21.431 { 00:13:21.431 "params": { 00:13:21.431 "name": "Nvme$subsystem", 00:13:21.431 "trtype": "$TEST_TRANSPORT", 00:13:21.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:21.431 "adrfam": "ipv4", 00:13:21.431 "trsvcid": "$NVMF_PORT", 00:13:21.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:21.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:21.431 "hdgst": ${hdgst:-false}, 00:13:21.431 "ddgst": ${ddgst:-false} 00:13:21.431 }, 00:13:21.431 "method": "bdev_nvme_attach_controller" 00:13:21.431 } 00:13:21.431 EOF 00:13:21.431 )") 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=522901 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:21.431 { 00:13:21.431 "params": { 00:13:21.431 "name": "Nvme$subsystem", 00:13:21.431 "trtype": "$TEST_TRANSPORT", 00:13:21.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:21.431 "adrfam": "ipv4", 00:13:21.431 "trsvcid": "$NVMF_PORT", 00:13:21.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:21.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:21.431 "hdgst": ${hdgst:-false}, 
00:13:21.431 "ddgst": ${ddgst:-false} 00:13:21.431 }, 00:13:21.431 "method": "bdev_nvme_attach_controller" 00:13:21.431 } 00:13:21.431 EOF 00:13:21.431 )") 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:21.431 { 00:13:21.431 "params": { 00:13:21.431 "name": "Nvme$subsystem", 00:13:21.431 "trtype": "$TEST_TRANSPORT", 00:13:21.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:21.431 "adrfam": "ipv4", 00:13:21.431 "trsvcid": "$NVMF_PORT", 00:13:21.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:21.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:21.431 "hdgst": ${hdgst:-false}, 00:13:21.431 "ddgst": ${ddgst:-false} 00:13:21.431 }, 00:13:21.431 "method": "bdev_nvme_attach_controller" 00:13:21.431 } 00:13:21.431 EOF 00:13:21.431 )") 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 522894 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:21.431 "params": { 00:13:21.431 "name": "Nvme1", 00:13:21.431 "trtype": "tcp", 00:13:21.431 "traddr": "10.0.0.2", 00:13:21.431 "adrfam": "ipv4", 00:13:21.431 "trsvcid": "4420", 00:13:21.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:21.431 "hdgst": false, 00:13:21.431 "ddgst": false 00:13:21.431 }, 00:13:21.431 "method": "bdev_nvme_attach_controller" 00:13:21.431 }' 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:21.431 "params": { 00:13:21.431 "name": "Nvme1", 00:13:21.431 "trtype": "tcp", 00:13:21.431 "traddr": "10.0.0.2", 00:13:21.431 "adrfam": "ipv4", 00:13:21.431 "trsvcid": "4420", 00:13:21.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:21.431 "hdgst": false, 00:13:21.431 "ddgst": false 00:13:21.431 }, 00:13:21.431 "method": "bdev_nvme_attach_controller" 00:13:21.431 }' 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:21.431 "params": { 00:13:21.431 "name": "Nvme1", 00:13:21.431 "trtype": "tcp", 00:13:21.431 "traddr": "10.0.0.2", 00:13:21.431 "adrfam": "ipv4", 00:13:21.431 "trsvcid": "4420", 00:13:21.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:21.431 "hdgst": false, 00:13:21.431 "ddgst": false 00:13:21.431 }, 00:13:21.431 "method": "bdev_nvme_attach_controller" 00:13:21.431 }' 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:21.431 18:09:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:21.431 "params": { 00:13:21.431 "name": "Nvme1", 00:13:21.431 "trtype": "tcp", 00:13:21.431 "traddr": "10.0.0.2", 00:13:21.431 "adrfam": "ipv4", 00:13:21.431 "trsvcid": "4420", 00:13:21.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:21.431 "hdgst": false, 00:13:21.431 "ddgst": false 00:13:21.431 }, 00:13:21.431 "method": "bdev_nvme_attach_controller" 00:13:21.431 }' 00:13:21.688 [2024-11-26 18:09:09.455500] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:13:21.688 [2024-11-26 18:09:09.455503] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:13:21.688 [2024-11-26 18:09:09.455497] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:13:21.689 [2024-11-26 18:09:09.455608] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-26 18:09:09.455609] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-26 18:09:09.455609] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:21.689 --proc-type=auto ] 00:13:21.689 --proc-type=auto ] 00:13:21.689 [2024-11-26 18:09:09.456156] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:13:21.689 [2024-11-26 18:09:09.456227] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:21.689 [2024-11-26 18:09:09.648177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.947 [2024-11-26 18:09:09.703972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:21.947 [2024-11-26 18:09:09.750598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.947 [2024-11-26 18:09:09.806538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:13:21.947 [2024-11-26 18:09:09.853169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.947 [2024-11-26 18:09:09.911101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:21.947 [2024-11-26 18:09:09.932680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.204 [2024-11-26 18:09:09.985627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:22.204 Running I/O for 1 seconds... 00:13:22.204 Running I/O for 1 seconds... 00:13:22.204 Running I/O for 1 seconds... 00:13:22.204 Running I/O for 1 seconds... 00:13:23.137 9903.00 IOPS, 38.68 MiB/s 00:13:23.137 Latency(us) 00:13:23.137 [2024-11-26T17:09:11.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.137 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:23.137 Nvme1n1 : 1.01 9959.52 38.90 0.00 0.00 12796.63 6456.51 19709.35 00:13:23.137 [2024-11-26T17:09:11.148Z] =================================================================================================================== 00:13:23.137 [2024-11-26T17:09:11.148Z] Total : 9959.52 38.90 0.00 0.00 12796.63 6456.51 19709.35 00:13:23.396 8096.00 IOPS, 31.62 MiB/s 00:13:23.396 Latency(us) 00:13:23.396 [2024-11-26T17:09:11.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.396 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:23.396 Nvme1n1 : 1.01 8148.43 31.83 0.00 0.00 15628.33 8155.59 25243.50 00:13:23.396 [2024-11-26T17:09:11.407Z] =================================================================================================================== 00:13:23.396 [2024-11-26T17:09:11.407Z] Total : 8148.43 31.83 0.00 0.00 15628.33 8155.59 25243.50 00:13:23.396 186216.00 IOPS, 727.41 MiB/s 00:13:23.396 Latency(us) 00:13:23.396 [2024-11-26T17:09:11.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.396 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:23.396 Nvme1n1 : 1.00 185858.97 726.01 0.00 0.00 684.87 304.92 1881.13 00:13:23.396 [2024-11-26T17:09:11.407Z] =================================================================================================================== 00:13:23.396 [2024-11-26T17:09:11.407Z] Total : 185858.97 726.01 0.00 0.00 684.87 304.92 1881.13 00:13:23.396 9308.00 IOPS, 36.36 MiB/s 00:13:23.396 Latency(us) 00:13:23.396 [2024-11-26T17:09:11.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:23.396 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:23.396 Nvme1n1 : 1.01 9381.78 36.65 0.00 0.00 13593.82 5024.43 26991.12 00:13:23.396 [2024-11-26T17:09:11.407Z] 
=================================================================================================================== 00:13:23.396 [2024-11-26T17:09:11.407Z] Total : 9381.78 36.65 0.00 0.00 13593.82 5024.43 26991.12 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 522896 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 522898 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 522901 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.396 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.396 rmmod nvme_tcp 00:13:23.654 rmmod nvme_fabrics 00:13:23.654 rmmod nvme_keyring 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 522703 ']' 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 522703 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 522703 ']' 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 522703 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 522703 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 522703' 00:13:23.654 killing process with pid 522703 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 522703 00:13:23.654 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 522703 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.913 18:09:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.818 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:25.818 00:13:25.818 real 0m7.316s 00:13:25.818 user 0m15.915s 00:13:25.818 sys 0m3.709s 00:13:25.818 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.818 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:25.818 ************************************ 00:13:25.818 END TEST nvmf_bdev_io_wait 00:13:25.818 ************************************ 00:13:25.818 18:09:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:25.818 18:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:25.818 18:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.818 18:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:25.818 ************************************ 00:13:25.818 START TEST nvmf_queue_depth 00:13:25.818 ************************************ 00:13:25.818 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:26.078 * Looking for test storage... 
00:13:26.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.078 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:26.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.078 --rc genhtml_branch_coverage=1 00:13:26.078 --rc genhtml_function_coverage=1 00:13:26.078 --rc genhtml_legend=1 00:13:26.078 --rc geninfo_all_blocks=1 00:13:26.079 --rc geninfo_unexecuted_blocks=1 00:13:26.079 00:13:26.079 ' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:26.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.079 --rc genhtml_branch_coverage=1 00:13:26.079 --rc genhtml_function_coverage=1 00:13:26.079 --rc genhtml_legend=1 00:13:26.079 --rc geninfo_all_blocks=1 00:13:26.079 --rc geninfo_unexecuted_blocks=1 00:13:26.079 00:13:26.079 ' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:26.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.079 --rc genhtml_branch_coverage=1 00:13:26.079 --rc genhtml_function_coverage=1 00:13:26.079 --rc genhtml_legend=1 00:13:26.079 --rc geninfo_all_blocks=1 00:13:26.079 --rc geninfo_unexecuted_blocks=1 00:13:26.079 00:13:26.079 ' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:26.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.079 --rc genhtml_branch_coverage=1 00:13:26.079 --rc genhtml_function_coverage=1 00:13:26.079 --rc genhtml_legend=1 00:13:26.079 --rc geninfo_all_blocks=1 00:13:26.079 --rc geninfo_unexecuted_blocks=1 00:13:26.079 00:13:26.079 ' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.079 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:26.080 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:26.080 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:13:26.080 18:09:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:28.654 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:28.655 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:28.655 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:28.655 Found net devices under 0000:09:00.0: cvl_0_0 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:28.655 Found net devices under 0000:09:00.1: cvl_0_1 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:28.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:13:28.655 00:13:28.655 --- 10.0.0.2 ping statistics --- 00:13:28.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.655 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:13:28.655 00:13:28.655 --- 10.0.0.1 ping statistics --- 00:13:28.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.655 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=525131 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 525131 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 525131 ']' 00:13:28.655 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.656 [2024-11-26 18:09:16.369932] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
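The nvmf_tcp_init sequence traced above (address flush, namespace creation, interface move, addressing, firewall rule, bidirectional ping) boils down to the sketch below. Interface, namespace and address names are taken from the log; the helpers in nvmf/common.sh wrap these steps differently, so treat this as an illustration rather than the script itself.

  # target port 0000:09:00.0 (cvl_0_0) goes into a private namespace at 10.0.0.2,
  # initiator port 0000:09:00.1 (cvl_0_1) stays in the root namespace at 10.0.0.1
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP 4420 on the initiator-side interface; the comment tag lets nvmftestfini strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the plumbing in place, nvmf_tgt itself is launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which produces the "Starting SPDK v25.01-pre" banner above and the EAL parameter dump that follows.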
00:13:28.656 [2024-11-26 18:09:16.370010] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.656 [2024-11-26 18:09:16.447178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.656 [2024-11-26 18:09:16.505751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.656 [2024-11-26 18:09:16.505806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.656 [2024-11-26 18:09:16.505819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.656 [2024-11-26 18:09:16.505831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.656 [2024-11-26 18:09:16.505841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.656 [2024-11-26 18:09:16.506435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.656 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.944 [2024-11-26 18:09:16.648399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.944 Malloc0 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.944 18:09:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.944 [2024-11-26 18:09:16.695796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=525155 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 525155 /var/tmp/bdevperf.sock 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 525155 ']' 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.944 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.944 [2024-11-26 18:09:16.741885] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
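Stripped of the xtrace prefixes, the target-side configuration that queue_depth.sh has issued so far is a five-call RPC sequence against the freshly started nvmf_tgt. A by-hand equivalent would look roughly like this (rpc.py path from the workspace; the rpc_cmd wrapper in the harness adds its own socket and retry handling):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB namespace backing, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up on 10.0.0.2:4420, the script starts bdevperf as a separate process (-z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10) and waits for its RPC socket, which is what the second "Starting SPDK v25.01-pre" banner above corresponds to.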
00:13:28.945 [2024-11-26 18:09:16.741945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525155 ] 00:13:28.945 [2024-11-26 18:09:16.807323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.945 [2024-11-26 18:09:16.864870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.202 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.202 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:29.202 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:29.202 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.202 18:09:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:29.202 NVMe0n1 00:13:29.202 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.202 18:09:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:29.459 Running I/O for 10 seconds... 00:13:31.354 8192.00 IOPS, 32.00 MiB/s [2024-11-26T17:09:20.296Z] 8452.50 IOPS, 33.02 MiB/s [2024-11-26T17:09:21.668Z] 8520.67 IOPS, 33.28 MiB/s [2024-11-26T17:09:22.234Z] 8453.25 IOPS, 33.02 MiB/s [2024-11-26T17:09:23.609Z] 8580.20 IOPS, 33.52 MiB/s [2024-11-26T17:09:24.544Z] 8548.67 IOPS, 33.39 MiB/s [2024-11-26T17:09:25.478Z] 8614.00 IOPS, 33.65 MiB/s [2024-11-26T17:09:26.413Z] 8593.50 IOPS, 33.57 MiB/s [2024-11-26T17:09:27.347Z] 8638.56 IOPS, 33.74 MiB/s [2024-11-26T17:09:27.347Z] 8649.40 IOPS, 33.79 MiB/s 00:13:39.336 Latency(us) 00:13:39.336 [2024-11-26T17:09:27.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.336 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:39.336 Verification LBA range: start 0x0 length 0x4000 00:13:39.336 NVMe0n1 : 10.08 8676.53 33.89 0.00 0.00 117452.99 21748.24 71070.15 00:13:39.336 [2024-11-26T17:09:27.347Z] =================================================================================================================== 00:13:39.336 [2024-11-26T17:09:27.347Z] Total : 8676.53 33.89 0.00 0.00 117452.99 21748.24 71070.15 00:13:39.336 { 00:13:39.336 "results": [ 00:13:39.336 { 00:13:39.336 "job": "NVMe0n1", 00:13:39.336 "core_mask": "0x1", 00:13:39.336 "workload": "verify", 00:13:39.336 "status": "finished", 00:13:39.336 "verify_range": { 00:13:39.336 "start": 0, 00:13:39.336 "length": 16384 00:13:39.336 }, 00:13:39.336 "queue_depth": 1024, 00:13:39.336 "io_size": 4096, 00:13:39.336 "runtime": 10.081567, 00:13:39.336 "iops": 8676.52816273502, 00:13:39.336 "mibps": 33.89268813568367, 00:13:39.336 "io_failed": 0, 00:13:39.336 "io_timeout": 0, 00:13:39.336 "avg_latency_us": 117452.99468292228, 00:13:39.336 "min_latency_us": 21748.242962962962, 00:13:39.336 "max_latency_us": 71070.15111111112 00:13:39.336 } 00:13:39.336 ], 00:13:39.336 "core_count": 1 00:13:39.336 } 00:13:39.336 18:09:27 
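The summary table and the JSON block above are internally consistent: at a sustained queue depth of 1024 and ~8676.5 IOPS, Little's law (average latency ≈ outstanding I/O ÷ IOPS) predicts roughly 1024 / 8676.5 ≈ 0.118 s, i.e. about 118,000 µs, within half a percent of the reported avg_latency_us of ~117,453. At this depth the figure is dominated by queueing, so it tracks QD/IOPS rather than per-command service time.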
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 525155 00:13:39.336 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 525155 ']' 00:13:39.336 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 525155 00:13:39.336 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:39.336 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.336 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 525155 00:13:39.595 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.595 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.595 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 525155' 00:13:39.595 killing process with pid 525155 00:13:39.595 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 525155 00:13:39.595 Received shutdown signal, test time was about 10.000000 seconds 00:13:39.595 00:13:39.595 Latency(us) 00:13:39.595 [2024-11-26T17:09:27.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.595 [2024-11-26T17:09:27.606Z] =================================================================================================================== 00:13:39.595 [2024-11-26T17:09:27.606Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.595 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 525155 00:13:39.595 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:39.595 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:39.595 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:39.595 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:39.854 rmmod nvme_tcp 00:13:39.854 rmmod nvme_fabrics 00:13:39.854 rmmod nvme_keyring 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 525131 ']' 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 525131 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 525131 ']' 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 525131 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 525131 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 525131' 00:13:39.854 killing process with pid 525131 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 525131 00:13:39.854 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 525131 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.118 18:09:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.025 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.025 00:13:42.025 real 0m16.209s 00:13:42.025 user 0m22.636s 00:13:42.025 sys 0m3.170s 00:13:42.025 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.025 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.025 ************************************ 00:13:42.025 END TEST nvmf_queue_depth 00:13:42.025 ************************************ 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.283 ************************************ 00:13:42.283 START TEST nvmf_target_multipath 00:13:42.283 ************************************ 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:42.283 * Looking for test storage... 00:13:42.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.283 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.284 --rc genhtml_branch_coverage=1 00:13:42.284 --rc genhtml_function_coverage=1 00:13:42.284 --rc genhtml_legend=1 00:13:42.284 --rc geninfo_all_blocks=1 00:13:42.284 --rc geninfo_unexecuted_blocks=1 00:13:42.284 00:13:42.284 ' 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.284 --rc genhtml_branch_coverage=1 00:13:42.284 --rc genhtml_function_coverage=1 00:13:42.284 --rc genhtml_legend=1 00:13:42.284 --rc geninfo_all_blocks=1 00:13:42.284 --rc geninfo_unexecuted_blocks=1 00:13:42.284 00:13:42.284 ' 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.284 --rc genhtml_branch_coverage=1 00:13:42.284 --rc genhtml_function_coverage=1 00:13:42.284 --rc genhtml_legend=1 00:13:42.284 --rc geninfo_all_blocks=1 00:13:42.284 --rc geninfo_unexecuted_blocks=1 00:13:42.284 00:13:42.284 ' 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.284 --rc genhtml_branch_coverage=1 00:13:42.284 --rc genhtml_function_coverage=1 00:13:42.284 --rc genhtml_legend=1 00:13:42.284 --rc geninfo_all_blocks=1 00:13:42.284 --rc geninfo_unexecuted_blocks=1 00:13:42.284 00:13:42.284 ' 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.284 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:13:42.285 18:09:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:44.820 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:44.820 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:44.820 Found net devices under 0000:09:00.0: cvl_0_0 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:44.820 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:44.821 18:09:32 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:44.821 Found net devices under 0000:09:00.1: cvl_0_1 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:44.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:13:44.821 00:13:44.821 --- 10.0.0.2 ping statistics --- 00:13:44.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.821 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:44.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:44.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:13:44.821 00:13:44.821 --- 10.0.0.1 ping statistics --- 00:13:44.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.821 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:44.821 only one NIC for nvmf test 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:44.821 rmmod nvme_tcp 00:13:44.821 rmmod nvme_fabrics 00:13:44.821 rmmod nvme_keyring 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.821 18:09:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:46.726 00:13:46.726 real 0m4.567s 00:13:46.726 user 0m0.919s 00:13:46.726 sys 0m1.654s 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:46.726 ************************************ 00:13:46.726 END TEST nvmf_target_multipath 00:13:46.726 ************************************ 00:13:46.726 18:09:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:46.727 18:09:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:46.727 18:09:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.727 18:09:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:46.727 ************************************ 00:13:46.727 START TEST nvmf_zcopy 00:13:46.727 ************************************ 00:13:46.727 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:46.986 * Looking for test storage... 
00:13:46.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.986 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:46.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.986 --rc genhtml_branch_coverage=1 00:13:46.986 --rc genhtml_function_coverage=1 00:13:46.986 --rc genhtml_legend=1 00:13:46.986 --rc geninfo_all_blocks=1 00:13:46.986 --rc geninfo_unexecuted_blocks=1 00:13:46.986 00:13:46.987 ' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:46.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.987 --rc genhtml_branch_coverage=1 00:13:46.987 --rc genhtml_function_coverage=1 00:13:46.987 --rc genhtml_legend=1 00:13:46.987 --rc geninfo_all_blocks=1 00:13:46.987 --rc geninfo_unexecuted_blocks=1 00:13:46.987 00:13:46.987 ' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:46.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.987 --rc genhtml_branch_coverage=1 00:13:46.987 --rc genhtml_function_coverage=1 00:13:46.987 --rc genhtml_legend=1 00:13:46.987 --rc geninfo_all_blocks=1 00:13:46.987 --rc geninfo_unexecuted_blocks=1 00:13:46.987 00:13:46.987 ' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:46.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.987 --rc genhtml_branch_coverage=1 00:13:46.987 --rc genhtml_function_coverage=1 00:13:46.987 --rc genhtml_legend=1 00:13:46.987 --rc geninfo_all_blocks=1 00:13:46.987 --rc geninfo_unexecuted_blocks=1 00:13:46.987 00:13:46.987 ' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:46.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:13:46.987 18:09:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:49.523 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:49.523 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:49.523 Found net devices under 0000:09:00.0: cvl_0_0 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.523 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:49.523 Found net devices under 0000:09:00.1: cvl_0_1 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:49.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:13:49.524 00:13:49.524 --- 10.0.0.2 ping statistics --- 00:13:49.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.524 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:49.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:49.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:13:49.524 00:13:49.524 --- 10.0.0.1 ping statistics --- 00:13:49.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.524 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=530367 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 530367 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 530367 ']' 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.524 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.524 [2024-11-26 18:09:37.311565] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:13:49.524 [2024-11-26 18:09:37.311655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.524 [2024-11-26 18:09:37.387002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.524 [2024-11-26 18:09:37.446925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.524 [2024-11-26 18:09:37.446972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.524 [2024-11-26 18:09:37.446985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.524 [2024-11-26 18:09:37.446996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.524 [2024-11-26 18:09:37.447006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:49.524 [2024-11-26 18:09:37.447581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.782 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.782 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:13:49.782 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.783 [2024-11-26 18:09:37.600268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.783 [2024-11-26 18:09:37.616486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.783 malloc0 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:49.783 { 00:13:49.783 "params": { 00:13:49.783 "name": "Nvme$subsystem", 00:13:49.783 "trtype": "$TEST_TRANSPORT", 00:13:49.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:49.783 "adrfam": "ipv4", 00:13:49.783 "trsvcid": "$NVMF_PORT", 00:13:49.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:49.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:49.783 "hdgst": ${hdgst:-false}, 00:13:49.783 "ddgst": ${ddgst:-false} 00:13:49.783 }, 00:13:49.783 "method": "bdev_nvme_attach_controller" 00:13:49.783 } 00:13:49.783 EOF 00:13:49.783 )") 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:13:49.783 18:09:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:49.783 "params": { 00:13:49.783 "name": "Nvme1", 00:13:49.783 "trtype": "tcp", 00:13:49.783 "traddr": "10.0.0.2", 00:13:49.783 "adrfam": "ipv4", 00:13:49.783 "trsvcid": "4420", 00:13:49.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:49.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:49.783 "hdgst": false, 00:13:49.783 "ddgst": false 00:13:49.783 }, 00:13:49.783 "method": "bdev_nvme_attach_controller" 00:13:49.783 }' 00:13:49.783 [2024-11-26 18:09:37.698067] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:13:49.783 [2024-11-26 18:09:37.698159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530529 ] 00:13:49.783 [2024-11-26 18:09:37.765413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.041 [2024-11-26 18:09:37.826272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.041 Running I/O for 10 seconds... 00:13:52.357 5856.00 IOPS, 45.75 MiB/s [2024-11-26T17:09:41.302Z] 5869.50 IOPS, 45.86 MiB/s [2024-11-26T17:09:42.235Z] 5868.33 IOPS, 45.85 MiB/s [2024-11-26T17:09:43.168Z] 5882.75 IOPS, 45.96 MiB/s [2024-11-26T17:09:44.100Z] 5880.80 IOPS, 45.94 MiB/s [2024-11-26T17:09:45.468Z] 5887.83 IOPS, 46.00 MiB/s [2024-11-26T17:09:46.400Z] 5892.57 IOPS, 46.04 MiB/s [2024-11-26T17:09:47.406Z] 5898.50 IOPS, 46.08 MiB/s [2024-11-26T17:09:48.340Z] 5893.56 IOPS, 46.04 MiB/s [2024-11-26T17:09:48.340Z] 5897.50 IOPS, 46.07 MiB/s 00:14:00.329 Latency(us) 00:14:00.329 [2024-11-26T17:09:48.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.329 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:00.329 Verification LBA range: start 0x0 length 0x1000 00:14:00.329 Nvme1n1 : 10.02 5901.05 46.10 0.00 0.00 21632.40 3592.34 30486.38 00:14:00.329 [2024-11-26T17:09:48.340Z] =================================================================================================================== 00:14:00.329 [2024-11-26T17:09:48.340Z] Total : 5901.05 46.10 0.00 0.00 21632.40 3592.34 30486.38 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=531728 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:00.329 { 00:14:00.329 "params": { 00:14:00.329 "name": 
"Nvme$subsystem", 00:14:00.329 "trtype": "$TEST_TRANSPORT", 00:14:00.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:00.329 "adrfam": "ipv4", 00:14:00.329 "trsvcid": "$NVMF_PORT", 00:14:00.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:00.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:00.329 "hdgst": ${hdgst:-false}, 00:14:00.329 "ddgst": ${ddgst:-false} 00:14:00.329 }, 00:14:00.329 "method": "bdev_nvme_attach_controller" 00:14:00.329 } 00:14:00.329 EOF 00:14:00.329 )") 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:00.329 [2024-11-26 18:09:48.287692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.329 [2024-11-26 18:09:48.287730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:00.329 18:09:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:00.329 "params": { 00:14:00.329 "name": "Nvme1", 00:14:00.330 "trtype": "tcp", 00:14:00.330 "traddr": "10.0.0.2", 00:14:00.330 "adrfam": "ipv4", 00:14:00.330 "trsvcid": "4420", 00:14:00.330 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.330 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.330 "hdgst": false, 00:14:00.330 "ddgst": false 00:14:00.330 }, 00:14:00.330 "method": "bdev_nvme_attach_controller" 00:14:00.330 }' 00:14:00.330 [2024-11-26 18:09:48.295659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.330 [2024-11-26 18:09:48.295682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.330 [2024-11-26 18:09:48.303674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.330 [2024-11-26 18:09:48.303695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.330 [2024-11-26 18:09:48.311681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.330 [2024-11-26 18:09:48.311702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.330 [2024-11-26 18:09:48.319686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.330 [2024-11-26 18:09:48.319707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.330 [2024-11-26 18:09:48.327721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.330 [2024-11-26 18:09:48.327741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.330 [2024-11-26 18:09:48.328002] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:14:00.330 [2024-11-26 18:09:48.328059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531728 ] 00:14:00.330 [2024-11-26 18:09:48.335739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.330 [2024-11-26 18:09:48.335761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.343755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.343775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.351777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.351797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.359803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.359824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.367820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.367840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.375842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.375863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.383865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.383885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.391886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.391906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.398360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.589 [2024-11-26 18:09:48.399908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.399928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.407962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.407992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.415984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.416016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.423974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.423994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.431996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.432016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:14:00.589 [2024-11-26 18:09:48.440017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.440037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.448040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.448060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.456062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.456089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.459778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.589 [2024-11-26 18:09:48.464084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.464105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.472106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.472125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.480157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.480186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.488178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.488210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.496197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.496229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.504227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.504260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.512251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.512300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.520268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.520330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.528266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.528309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.536328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.536379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.544365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.544395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 
18:09:48.552394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.552435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.560403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.560435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.568386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.568409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.576422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.576444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.584613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.584639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.589 [2024-11-26 18:09:48.592624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.589 [2024-11-26 18:09:48.592663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.847 [2024-11-26 18:09:48.600652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.847 [2024-11-26 18:09:48.600690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.847 [2024-11-26 18:09:48.608673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.847 [2024-11-26 18:09:48.608696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.847 [2024-11-26 18:09:48.616709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.847 [2024-11-26 18:09:48.616733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.847 [2024-11-26 18:09:48.624717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.847 [2024-11-26 18:09:48.624740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.847 [2024-11-26 18:09:48.632744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.847 [2024-11-26 18:09:48.632767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.847 [2024-11-26 18:09:48.640769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.847 [2024-11-26 18:09:48.640792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.847 [2024-11-26 18:09:48.648814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.847 [2024-11-26 18:09:48.648839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.847 [2024-11-26 18:09:48.656834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.847 [2024-11-26 18:09:48.656858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.847 Running I/O for 5 seconds... 
00:14:00.847 [2024-11-26 18:09:48.664869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.847 [2024-11-26 18:09:48.664898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.678529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.678559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.689378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.689406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.700475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.700503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.712974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.713018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.722369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.722396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.733418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.733446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.743585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.743613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.754239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.754267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.764807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.764834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.775697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.775724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.788264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.788315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.798829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.798870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.809810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.809837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.820720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 
[2024-11-26 18:09:48.820747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.831961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.831989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.845317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.845345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:00.848 [2024-11-26 18:09:48.855420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:00.848 [2024-11-26 18:09:48.855448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.865849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.865876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.876448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.876488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.887449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.887478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.898158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.898186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.910767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.910802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.921048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.921076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.931419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.931447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.941894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.941932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.952634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.952662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.963260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.963313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.976195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.976222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.988202] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.988229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:48.997521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:48.997550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.009340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.009369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.022316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.022345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.032728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.032756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.043011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.043039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.053466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.053494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.064026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.064053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.074661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.074689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.085475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.085503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.098486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.098514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.106 [2024-11-26 18:09:49.108909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.106 [2024-11-26 18:09:49.108936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.364 [2024-11-26 18:09:49.119783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.364 [2024-11-26 18:09:49.119817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.364 [2024-11-26 18:09:49.132042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.364 [2024-11-26 18:09:49.132070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.364 [2024-11-26 18:09:49.141949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.364 [2024-11-26 18:09:49.141977] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.364 [2024-11-26 18:09:49.152557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.364 [2024-11-26 18:09:49.152585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.364 [2024-11-26 18:09:49.163383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.364 [2024-11-26 18:09:49.163410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.364 [2024-11-26 18:09:49.174279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.364 [2024-11-26 18:09:49.174330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.364 [2024-11-26 18:09:49.184727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.364 [2024-11-26 18:09:49.184754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.364 [2024-11-26 18:09:49.195375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.364 [2024-11-26 18:09:49.195403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.364 [2024-11-26 18:09:49.208869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.364 [2024-11-26 18:09:49.208897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.219084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.219111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.229724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.229751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.240248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.240275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.250847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.250872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.261551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.261594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.272660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.272687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.283776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.283803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.296815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.296843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.307016] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.307043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.317628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.317655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.328034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.328088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.338775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.338802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.349119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.349147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.359654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.359681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.365 [2024-11-26 18:09:49.372397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.365 [2024-11-26 18:09:49.372425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.623 [2024-11-26 18:09:49.382681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.623 [2024-11-26 18:09:49.382709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.623 [2024-11-26 18:09:49.393437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.623 [2024-11-26 18:09:49.393466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.623 [2024-11-26 18:09:49.404059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.623 [2024-11-26 18:09:49.404086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.623 [2024-11-26 18:09:49.414765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.623 [2024-11-26 18:09:49.414793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.427131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.427158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.436661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.436689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.447709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.447736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.458424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.458453] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.470699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.470727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.480320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.480347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.491871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.491900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.505009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.505036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.515395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.515423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.526256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.526285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.539512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.539547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.549600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.549627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.560068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.560095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.570297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.570333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.580836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.580863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.591389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.591418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.602164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.602191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.614666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.614694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.624 [2024-11-26 18:09:49.624552] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.624 [2024-11-26 18:09:49.624594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.635681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.635709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.646631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.646658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.661030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.661057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 11784.00 IOPS, 92.06 MiB/s [2024-11-26T17:09:49.893Z] [2024-11-26 18:09:49.671380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.671408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.682352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.682380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.693252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.693279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.704041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.704067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.714554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.714583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.725659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.725702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.738395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.738424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.748656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.748699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.759412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.759439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.770081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.770122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.780735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:01.882 [2024-11-26 18:09:49.780762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.791684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.882 [2024-11-26 18:09:49.791710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.882 [2024-11-26 18:09:49.802723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.883 [2024-11-26 18:09:49.802750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.883 [2024-11-26 18:09:49.815993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.883 [2024-11-26 18:09:49.816021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.883 [2024-11-26 18:09:49.826064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.883 [2024-11-26 18:09:49.826104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.883 [2024-11-26 18:09:49.837003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.883 [2024-11-26 18:09:49.837029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.883 [2024-11-26 18:09:49.849457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.883 [2024-11-26 18:09:49.849485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.883 [2024-11-26 18:09:49.859506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.883 [2024-11-26 18:09:49.859534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.883 [2024-11-26 18:09:49.870122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.883 [2024-11-26 18:09:49.870148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:01.883 [2024-11-26 18:09:49.882698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:01.883 [2024-11-26 18:09:49.882725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.892711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.892739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.903267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.903320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.914278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.914328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.924567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.924610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.935159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.935186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.947902] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.947929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.957170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.957196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.968440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.968468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.979087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.979113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:49.989584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:49.989627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.000476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.000503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.011318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.011356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.023013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.023043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.033617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.033647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.044251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.044281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.055874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.055906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.066974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.067001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.077920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.077962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.088569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.088612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.101510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.101539] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.112020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.112049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.122713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.122754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.133463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.133491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.141 [2024-11-26 18:09:50.144497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.141 [2024-11-26 18:09:50.144524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.156864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.156900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.166981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.167008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.177604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.177647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.188049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.188076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.198723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.198750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.209384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.209411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.221802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.221844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.232154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.232181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.242857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.242885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.255765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.255805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.266059] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.266088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.276476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.276504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.287314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.287343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.298437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.298465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.312095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.312122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.322703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.322731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.333282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.333333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.346122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.346166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.357962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.357989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.366925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.366959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.378791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.378818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.389632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.389659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.399 [2024-11-26 18:09:50.400205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.399 [2024-11-26 18:09:50.400231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.413111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.413138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.423052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.423079] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.434043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.434070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.444546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.444574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.454883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.454911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.465507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.465535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.478273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.478324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.488726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.488754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.499251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.499278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.509712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.509739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.520260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.520314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.532998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.533025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.543241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.543269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.554453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.554481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.566933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.566959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.577053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.577087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.587708] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.587735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.598127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.598154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.608648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.608675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.619613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.619641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.630294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.630331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.641141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.641168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 [2024-11-26 18:09:50.653717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.653744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.659 11827.00 IOPS, 92.40 MiB/s [2024-11-26T17:09:50.670Z] [2024-11-26 18:09:50.665480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.659 [2024-11-26 18:09:50.665509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.917 [2024-11-26 18:09:50.674560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.917 [2024-11-26 18:09:50.674602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.917 [2024-11-26 18:09:50.685938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.917 [2024-11-26 18:09:50.685964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.917 [2024-11-26 18:09:50.696546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.917 [2024-11-26 18:09:50.696585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.707210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.707237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.720141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.720168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.730527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.730554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.741116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:02.918 [2024-11-26 18:09:50.741158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.753838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.753865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.763805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.763832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.774418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.774446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.785109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.785137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.797704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.797731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.806496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.806524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.817610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.817637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.828787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.828815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.839654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.839682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.852232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.852259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.862348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.862378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.872579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.872621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.883275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.883327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.893690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.893717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.904327] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.904355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.914647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.914689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.918 [2024-11-26 18:09:50.925135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:02.918 [2024-11-26 18:09:50.925163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.189 [2024-11-26 18:09:50.936133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.189 [2024-11-26 18:09:50.936161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.189 [2024-11-26 18:09:50.946698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.189 [2024-11-26 18:09:50.946725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.189 [2024-11-26 18:09:50.957008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.189 [2024-11-26 18:09:50.957036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.189 [2024-11-26 18:09:50.967181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.189 [2024-11-26 18:09:50.967209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.189 [2024-11-26 18:09:50.977830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.189 [2024-11-26 18:09:50.977857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.189 [2024-11-26 18:09:50.990096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.189 [2024-11-26 18:09:50.990124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.189 [2024-11-26 18:09:50.999410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.189 [2024-11-26 18:09:50.999438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.189 [2024-11-26 18:09:51.010152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.189 [2024-11-26 18:09:51.010180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.189 [2024-11-26 18:09:51.020425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.189 [2024-11-26 18:09:51.020454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.030970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.030997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.041601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.041629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.052200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.052228] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.064817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.064845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.076411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.076439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.085116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.085144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.096379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.096406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.109319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.109346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.119703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.119731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.130124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.130152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.140713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.140741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.151199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.151227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.162081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.162109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.172900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.172927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.183729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.183758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.190 [2024-11-26 18:09:51.194852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.190 [2024-11-26 18:09:51.194880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.450 [2024-11-26 18:09:51.205254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.450 [2024-11-26 18:09:51.205282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:03.450 [2024-11-26 18:09:51.215757] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:03.450 [2024-11-26 18:09:51.215785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:03.710 11858.00 IOPS, 92.64 MiB/s [2024-11-26T17:09:51.721Z]
00:14:04.746 11866.00 IOPS, 92.70 MiB/s [2024-11-26T17:09:52.757Z]
00:14:05.780 11852.60 IOPS, 92.60 MiB/s [2024-11-26T17:09:53.791Z]
00:14:05.780
00:14:05.780 Latency(us)
00:14:05.780 [2024-11-26T17:09:53.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:05.780 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:05.780 Nvme1n1 : 5.01 11855.06 92.62 0.00 0.00 10783.39 4660.34 18738.44
00:14:05.780 [2024-11-26T17:09:53.791Z] ===================================================================================================================
00:14:05.780 [2024-11-26T17:09:53.791Z] Total : 11855.06 92.62 0.00 0.00 10783.39 4660.34 18738.44
00:14:06.039 [2024-11-26 18:09:53.856521]
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.039 [2024-11-26 18:09:53.856541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.039 [2024-11-26 18:09:53.864561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.039 [2024-11-26 18:09:53.864587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.039 [2024-11-26 18:09:53.872620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.039 [2024-11-26 18:09:53.872662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.039 [2024-11-26 18:09:53.880645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.039 [2024-11-26 18:09:53.880684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.039 [2024-11-26 18:09:53.888628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.039 [2024-11-26 18:09:53.888663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.039 [2024-11-26 18:09:53.896656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.039 [2024-11-26 18:09:53.896676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.039 [2024-11-26 18:09:53.904676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:06.039 [2024-11-26 18:09:53.904696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:06.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (531728) - No such process 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 531728 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:06.039 delay0 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.039 18:09:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 
-l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:06.039 [2024-11-26 18:09:54.026175] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:12.594 Initializing NVMe Controllers 00:14:12.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:12.594 Initialization complete. Launching workers. 00:14:12.594 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 82 00:14:12.594 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 369, failed to submit 33 00:14:12.594 success 168, unsuccessful 201, failed 0 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:12.594 rmmod nvme_tcp 00:14:12.594 rmmod nvme_fabrics 00:14:12.594 rmmod nvme_keyring 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 530367 ']' 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 530367 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 530367 ']' 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 530367 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530367 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530367' 00:14:12.594 killing process with pid 530367 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 530367 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 530367 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.594 18:10:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:15.133 00:14:15.133 real 0m27.844s 00:14:15.133 user 0m40.858s 00:14:15.133 sys 0m8.295s 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:15.133 ************************************ 00:14:15.133 END TEST nvmf_zcopy 00:14:15.133 ************************************ 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:15.133 ************************************ 00:14:15.133 START TEST nvmf_nmic 00:14:15.133 ************************************ 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:15.133 * Looking for test storage... 
00:14:15.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:15.133 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.134 --rc genhtml_branch_coverage=1 00:14:15.134 --rc genhtml_function_coverage=1 00:14:15.134 --rc genhtml_legend=1 00:14:15.134 --rc geninfo_all_blocks=1 00:14:15.134 --rc geninfo_unexecuted_blocks=1 00:14:15.134 00:14:15.134 ' 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.134 --rc genhtml_branch_coverage=1 00:14:15.134 --rc genhtml_function_coverage=1 00:14:15.134 --rc genhtml_legend=1 00:14:15.134 --rc geninfo_all_blocks=1 00:14:15.134 --rc geninfo_unexecuted_blocks=1 00:14:15.134 00:14:15.134 ' 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.134 --rc genhtml_branch_coverage=1 00:14:15.134 --rc genhtml_function_coverage=1 00:14:15.134 --rc genhtml_legend=1 00:14:15.134 --rc geninfo_all_blocks=1 00:14:15.134 --rc geninfo_unexecuted_blocks=1 00:14:15.134 00:14:15.134 ' 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:15.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.134 --rc genhtml_branch_coverage=1 00:14:15.134 --rc genhtml_function_coverage=1 00:14:15.134 --rc genhtml_legend=1 00:14:15.134 --rc geninfo_all_blocks=1 00:14:15.134 --rc geninfo_unexecuted_blocks=1 00:14:15.134 00:14:15.134 ' 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:15.134 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:15.134 
18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:14:15.135 18:10:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:17.037 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:17.037 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:17.037 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.038 18:10:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:17.038 Found net devices under 0000:09:00.0: cvl_0_0 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:17.038 Found net devices under 0000:09:00.1: cvl_0_1 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.038 18:10:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:17.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:14:17.297 00:14:17.297 --- 10.0.0.2 ping statistics --- 00:14:17.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.297 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:14:17.297 00:14:17.297 --- 10.0.0.1 ping statistics --- 00:14:17.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.297 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.297 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=535128 00:14:17.298 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.298 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 535128 00:14:17.298 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 535128 ']' 00:14:17.298 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.298 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.298 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.298 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.298 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.298 [2024-11-26 18:10:05.208984] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:14:17.298 [2024-11-26 18:10:05.209072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.298 [2024-11-26 18:10:05.288723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.556 [2024-11-26 18:10:05.350050] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.556 [2024-11-26 18:10:05.350134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.556 [2024-11-26 18:10:05.350148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.556 [2024-11-26 18:10:05.350159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.556 [2024-11-26 18:10:05.350169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:17.556 [2024-11-26 18:10:05.351879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.556 [2024-11-26 18:10:05.351904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.556 [2024-11-26 18:10:05.351958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.556 [2024-11-26 18:10:05.351961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.556 [2024-11-26 18:10:05.504109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.556 Malloc0 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.556 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.814 [2024-11-26 18:10:05.575988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:17.814 test case1: single bdev can't be used in multiple subsystems 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.814 [2024-11-26 18:10:05.599798] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:17.814 [2024-11-26 18:10:05.599827] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:17.814 [2024-11-26 18:10:05.599865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.814 request: 00:14:17.814 { 00:14:17.814 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:17.814 "namespace": { 00:14:17.814 "bdev_name": "Malloc0", 00:14:17.814 "no_auto_visible": false 
00:14:17.814 }, 00:14:17.814 "method": "nvmf_subsystem_add_ns", 00:14:17.814 "req_id": 1 00:14:17.814 } 00:14:17.814 Got JSON-RPC error response 00:14:17.814 response: 00:14:17.814 { 00:14:17.814 "code": -32602, 00:14:17.814 "message": "Invalid parameters" 00:14:17.814 } 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:17.814 Adding namespace failed - expected result. 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:17.814 test case2: host connect to nvmf target in multiple paths 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:17.814 [2024-11-26 18:10:05.607923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.814 18:10:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.380 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:18.949 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.949 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:14:18.949 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.949 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:18.949 18:10:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.474 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.474 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:21.474 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.474 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:21.474 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.474 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:14:21.474 18:10:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:21.474 [global] 00:14:21.474 thread=1 00:14:21.474 invalidate=1 00:14:21.474 rw=write 00:14:21.474 time_based=1 00:14:21.474 runtime=1 00:14:21.474 ioengine=libaio 00:14:21.474 direct=1 00:14:21.474 bs=4096 00:14:21.474 iodepth=1 00:14:21.474 norandommap=0 00:14:21.474 numjobs=1 00:14:21.474 00:14:21.474 verify_dump=1 00:14:21.474 verify_backlog=512 00:14:21.474 verify_state_save=0 00:14:21.474 do_verify=1 00:14:21.474 verify=crc32c-intel 00:14:21.474 [job0] 00:14:21.474 filename=/dev/nvme0n1 00:14:21.474 Could not set queue depth (nvme0n1) 00:14:21.474 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:21.474 fio-3.35 00:14:21.474 Starting 1 thread 00:14:22.407 00:14:22.407 job0: (groupid=0, jobs=1): err= 0: pid=535648: Tue Nov 26 18:10:10 2024 00:14:22.407 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:14:22.407 slat (nsec): min=10386, max=36953, avg=24736.52, stdev=9566.31 00:14:22.407 clat (usec): min=40696, max=44981, avg=41814.98, stdev=880.25 00:14:22.407 lat (usec): min=40707, max=45009, avg=41839.72, stdev=882.12 00:14:22.407 clat percentiles (usec): 00:14:22.407 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:14:22.407 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:14:22.407 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:22.407 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:14:22.407 | 99.99th=[44827] 00:14:22.407 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:14:22.408 slat (usec): min=10, max=29711, avg=78.78, stdev=1312.16 00:14:22.408 clat (usec): min=135, max=292, avg=176.74, stdev=16.08 00:14:22.408 lat (usec): min=146, max=30003, avg=255.52, stdev=1317.39 00:14:22.408 clat percentiles (usec): 00:14:22.408 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 167], 00:14:22.408 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:14:22.408 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 198], 00:14:22.408 | 99.00th=[ 215], 99.50th=[ 260], 99.90th=[ 293], 99.95th=[ 293], 00:14:22.408 | 99.99th=[ 293] 00:14:22.408 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:22.408 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:22.408 lat (usec) : 250=95.50%, 500=0.56% 00:14:22.408 lat (msec) : 50=3.94% 00:14:22.408 cpu : usr=0.69%, sys=1.38%, ctx=535, majf=0, minf=1 00:14:22.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.408 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.408 00:14:22.408 Run status group 0 (all jobs): 00:14:22.408 READ: bw=82.9KiB/s (84.9kB/s), 82.9KiB/s-82.9KiB/s (84.9kB/s-84.9kB/s), io=84.0KiB (86.0kB), run=1013-1013msec 00:14:22.408 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:14:22.408 00:14:22.408 Disk stats (read/write): 00:14:22.408 nvme0n1: ios=70/512, merge=0/0, ticks=1020/85, in_queue=1105, util=98.70% 00:14:22.408 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:22.664 rmmod nvme_tcp 00:14:22.664 rmmod nvme_fabrics 00:14:22.664 rmmod nvme_keyring 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 535128 ']' 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 535128 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 535128 ']' 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 535128 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 535128 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 535128' 00:14:22.664 killing process with pid 535128 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 535128 00:14:22.664 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 535128 00:14:22.922 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.922 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.922 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.922 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:23.180 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:14:23.180 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:23.180 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:14:23.180 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:23.180 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:23.180 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.180 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.180 18:10:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.084 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:25.084 00:14:25.084 real 0m10.402s 00:14:25.084 user 0m23.271s 00:14:25.084 sys 0m2.581s 00:14:25.084 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.084 18:10:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:25.084 ************************************ 00:14:25.084 END TEST nvmf_nmic 00:14:25.084 ************************************ 00:14:25.084 18:10:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:25.084 18:10:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:25.084 18:10:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.084 18:10:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:25.084 ************************************ 00:14:25.084 START TEST nvmf_fio_target 00:14:25.084 ************************************ 00:14:25.084 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:25.084 * Looking for test storage... 
00:14:25.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.084 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:25.084 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:25.084 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.342 --rc genhtml_branch_coverage=1 00:14:25.342 --rc genhtml_function_coverage=1 00:14:25.342 --rc genhtml_legend=1 00:14:25.342 --rc geninfo_all_blocks=1 00:14:25.342 --rc geninfo_unexecuted_blocks=1 00:14:25.342 00:14:25.342 ' 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.342 --rc genhtml_branch_coverage=1 00:14:25.342 --rc genhtml_function_coverage=1 00:14:25.342 --rc genhtml_legend=1 00:14:25.342 --rc geninfo_all_blocks=1 00:14:25.342 --rc geninfo_unexecuted_blocks=1 00:14:25.342 00:14:25.342 ' 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.342 --rc genhtml_branch_coverage=1 00:14:25.342 --rc genhtml_function_coverage=1 00:14:25.342 --rc genhtml_legend=1 00:14:25.342 --rc geninfo_all_blocks=1 00:14:25.342 --rc geninfo_unexecuted_blocks=1 00:14:25.342 00:14:25.342 ' 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:25.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.342 --rc genhtml_branch_coverage=1 00:14:25.342 --rc genhtml_function_coverage=1 00:14:25.342 --rc genhtml_legend=1 00:14:25.342 --rc geninfo_all_blocks=1 00:14:25.342 --rc geninfo_unexecuted_blocks=1 00:14:25.342 00:14:25.342 ' 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.342 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.343 18:10:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:25.343 18:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.878 18:10:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:27.878 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:27.878 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.878 18:10:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:27.878 Found net devices under 0000:09:00.0: cvl_0_0 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:27.878 Found net devices under 0000:09:00.1: cvl_0_1 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:27.878 18:10:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.878 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:27.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:14:27.879 00:14:27.879 --- 10.0.0.2 ping statistics --- 00:14:27.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.879 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:27.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:14:27.879 00:14:27.879 --- 10.0.0.1 ping statistics --- 00:14:27.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.879 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=537851 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 537851 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 537851 ']' 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.879 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.879 [2024-11-26 18:10:15.737212] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
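The nvmf_tcp_init step traced above pairs the two E810 ports back-to-back on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), TCP port 4420 is opened, and both directions are ping-tested before nvmf_tgt is launched inside the namespace. A minimal sketch of the equivalent manual steps, assuming the same cvl_0_0/cvl_0_1 device names the ice driver reported above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port goes into the namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace

Because nvmf_tgt is started with ip netns exec cvl_0_0_ns_spdk, the listener it later opens on 10.0.0.2:4420 is reached from the host-side kernel initiator over the real NIC rather than loopback.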
00:14:27.879 [2024-11-26 18:10:15.737293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.879 [2024-11-26 18:10:15.808239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.879 [2024-11-26 18:10:15.864882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.879 [2024-11-26 18:10:15.864933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.879 [2024-11-26 18:10:15.864961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.879 [2024-11-26 18:10:15.864972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.879 [2024-11-26 18:10:15.864981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.879 [2024-11-26 18:10:15.866555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.879 [2024-11-26 18:10:15.866609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.879 [2024-11-26 18:10:15.866684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.879 [2024-11-26 18:10:15.866687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.138 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.138 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:14:28.138 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:28.138 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:28.138 18:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.138 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.138 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:28.397 [2024-11-26 18:10:16.274175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.397 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:28.656 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:28.656 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:28.914 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:28.914 18:10:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:29.173 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:29.173 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:29.776 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:29.776 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:29.776 18:10:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.059 18:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:30.059 18:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.317 18:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:30.317 18:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:30.882 18:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:30.883 18:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:30.883 18:10:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:31.140 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:31.140 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.704 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:31.704 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.704 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.963 [2024-11-26 18:10:19.946556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.963 18:10:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:32.529 18:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:32.529 18:10:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:33.460 18:10:21 
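Once nvmf_tgt is running, target/fio.sh assembles the whole target over rpc.py and then attaches the kernel initiator, exactly as traced above: seven 64 MiB malloc bdevs, two of them combined into a raid0 array and three into a concat array, exported as four namespaces of a single subsystem. A condensed sketch of that sequence (illustrative only; rpc stands for the full scripts/rpc.py path, and the --hostnqn/--hostid values are whatever nvme gen-hostnqn produced earlier):

    rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 7); do rpc bdev_malloc_create 64 512; done        # auto-named Malloc0..Malloc6
    rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect --hostnqn=... --hostid=... -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The waitforserial check that follows simply polls lsblk until four devices carrying serial SPDKISFASTANDAWESOME appear (/dev/nvme0n1 through /dev/nvme0n4); those are the filenames the fio job files below are pointed at.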
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:33.460 18:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:14:33.460 18:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:33.460 18:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:14:33.461 18:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:14:33.461 18:10:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:14:35.360 18:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:35.360 18:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:35.360 18:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:35.360 18:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:14:35.360 18:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:35.360 18:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:14:35.360 18:10:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:35.360 [global] 00:14:35.360 thread=1 00:14:35.360 invalidate=1 00:14:35.360 rw=write 00:14:35.360 time_based=1 00:14:35.360 runtime=1 00:14:35.360 ioengine=libaio 00:14:35.360 direct=1 00:14:35.360 bs=4096 00:14:35.360 iodepth=1 00:14:35.360 norandommap=0 00:14:35.360 numjobs=1 00:14:35.360 00:14:35.360 verify_dump=1 00:14:35.360 verify_backlog=512 00:14:35.360 verify_state_save=0 00:14:35.360 do_verify=1 00:14:35.360 verify=crc32c-intel 00:14:35.360 [job0] 00:14:35.360 filename=/dev/nvme0n1 00:14:35.360 [job1] 00:14:35.360 filename=/dev/nvme0n2 00:14:35.360 [job2] 00:14:35.360 filename=/dev/nvme0n3 00:14:35.360 [job3] 00:14:35.360 filename=/dev/nvme0n4 00:14:35.360 Could not set queue depth (nvme0n1) 00:14:35.360 Could not set queue depth (nvme0n2) 00:14:35.360 Could not set queue depth (nvme0n3) 00:14:35.360 Could not set queue depth (nvme0n4) 00:14:35.618 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.618 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.618 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.618 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:35.618 fio-3.35 00:14:35.618 Starting 4 threads 00:14:36.995 00:14:36.995 job0: (groupid=0, jobs=1): err= 0: pid=538936: Tue Nov 26 18:10:24 2024 00:14:36.995 read: IOPS=1068, BW=4275KiB/s (4378kB/s)(4284KiB/1002msec) 00:14:36.995 slat (nsec): min=5701, max=63819, avg=15190.04, stdev=5492.00 00:14:36.995 clat (usec): min=198, max=41285, avg=616.42, stdev=3772.91 00:14:36.995 lat (usec): min=204, max=41304, avg=631.61, stdev=3773.13 00:14:36.995 clat percentiles (usec): 00:14:36.995 | 1.00th=[ 204], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 237], 
00:14:36.995 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 00:14:36.995 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 297], 00:14:36.995 | 99.00th=[ 441], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:14:36.995 | 99.99th=[41157] 00:14:36.995 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:14:36.995 slat (nsec): min=7738, max=53795, avg=14443.63, stdev=6404.85 00:14:36.995 clat (usec): min=138, max=737, avg=189.27, stdev=22.81 00:14:36.995 lat (usec): min=149, max=748, avg=203.71, stdev=24.79 00:14:36.995 clat percentiles (usec): 00:14:36.995 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 178], 00:14:36.995 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:14:36.995 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 212], 00:14:36.995 | 99.00th=[ 241], 99.50th=[ 269], 99.90th=[ 429], 99.95th=[ 742], 00:14:36.995 | 99.99th=[ 742] 00:14:36.995 bw ( KiB/s): min= 8352, max= 8352, per=52.70%, avg=8352.00, stdev= 0.00, samples=1 00:14:36.995 iops : min= 2088, max= 2088, avg=2088.00, stdev= 0.00, samples=1 00:14:36.995 lat (usec) : 250=77.02%, 500=22.55%, 750=0.04% 00:14:36.995 lat (msec) : 50=0.38% 00:14:36.995 cpu : usr=3.20%, sys=5.00%, ctx=2607, majf=0, minf=1 00:14:36.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:36.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.995 issued rwts: total=1071,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:36.995 job1: (groupid=0, jobs=1): err= 0: pid=538937: Tue Nov 26 18:10:24 2024 00:14:36.995 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:14:36.995 slat (nsec): min=8014, max=37345, avg=27249.05, stdev=10064.32 00:14:36.995 clat (usec): min=1644, max=42004, avg=40012.03, stdev=8575.43 00:14:36.995 lat (usec): min=1659, max=42021, avg=40039.28, stdev=8578.06 00:14:36.995 clat percentiles (usec): 00:14:36.995 | 1.00th=[ 1647], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:14:36.995 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:36.995 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:36.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:36.995 | 99.99th=[42206] 00:14:36.995 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:14:36.995 slat (nsec): min=8298, max=33651, avg=9702.96, stdev=2155.69 00:14:36.995 clat (usec): min=148, max=1208, avg=226.79, stdev=69.07 00:14:36.995 lat (usec): min=158, max=1218, avg=236.49, stdev=69.31 00:14:36.995 clat percentiles (usec): 00:14:36.995 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 176], 00:14:36.995 | 30.00th=[ 210], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 241], 00:14:36.995 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 260], 00:14:36.995 | 99.00th=[ 400], 99.50th=[ 725], 99.90th=[ 1205], 99.95th=[ 1205], 00:14:36.995 | 99.99th=[ 1205] 00:14:36.995 bw ( KiB/s): min= 4096, max= 4096, per=25.84%, avg=4096.00, stdev= 0.00, samples=1 00:14:36.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:36.995 lat (usec) : 250=87.27%, 500=7.87%, 750=0.37%, 1000=0.19% 00:14:36.995 lat (msec) : 2=0.37%, 50=3.93% 00:14:36.995 cpu : usr=0.00%, sys=1.10%, ctx=536, majf=0, minf=1 00:14:36.995 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:36.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.995 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:36.995 job2: (groupid=0, jobs=1): err= 0: pid=538938: Tue Nov 26 18:10:24 2024 00:14:36.995 read: IOPS=20, BW=83.4KiB/s (85.4kB/s)(84.0KiB/1007msec) 00:14:36.995 slat (nsec): min=7434, max=37968, avg=28553.62, stdev=9571.67 00:14:36.995 clat (usec): min=40938, max=41994, avg=41806.75, stdev=355.69 00:14:36.995 lat (usec): min=40974, max=42013, avg=41835.31, stdev=355.71 00:14:36.995 clat percentiles (usec): 00:14:36.995 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:14:36.995 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:14:36.995 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:36.995 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:36.995 | 99.99th=[42206] 00:14:36.995 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:14:36.995 slat (usec): min=7, max=1027, avg=11.63, stdev=45.02 00:14:36.995 clat (usec): min=157, max=392, avg=235.17, stdev=33.53 00:14:36.995 lat (usec): min=165, max=1324, avg=246.81, stdev=58.45 00:14:36.995 clat percentiles (usec): 00:14:36.995 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 219], 00:14:36.995 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:14:36.995 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:14:36.995 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 392], 99.95th=[ 392], 00:14:36.995 | 99.99th=[ 392] 00:14:36.995 bw ( KiB/s): min= 4096, max= 4096, per=25.84%, avg=4096.00, stdev= 0.00, samples=1 00:14:36.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:36.995 lat (usec) : 250=72.23%, 500=23.83% 00:14:36.995 lat (msec) : 50=3.94% 00:14:36.995 cpu : usr=0.00%, sys=1.09%, ctx=536, majf=0, minf=1 00:14:36.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:36.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.995 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:36.995 job3: (groupid=0, jobs=1): err= 0: pid=538939: Tue Nov 26 18:10:24 2024 00:14:36.995 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:14:36.995 slat (nsec): min=7470, max=47867, avg=14137.43, stdev=5066.65 00:14:36.995 clat (usec): min=193, max=41993, avg=661.08, stdev=4088.89 00:14:36.995 lat (usec): min=202, max=42021, avg=675.22, stdev=4090.19 00:14:36.995 clat percentiles (usec): 00:14:36.995 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 231], 00:14:36.995 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:14:36.995 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 310], 95.00th=[ 383], 00:14:36.995 | 99.00th=[ 424], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:36.995 | 99.99th=[42206] 00:14:36.995 write: IOPS=1428, BW=5714KiB/s (5851kB/s)(5720KiB/1001msec); 0 zone resets 00:14:36.995 slat (nsec): min=5771, max=61036, avg=16373.04, stdev=7361.84 00:14:36.995 clat (usec): min=146, max=421, avg=192.04, stdev=35.21 00:14:36.995 lat (usec): 
min=153, max=438, avg=208.41, stdev=34.12 00:14:36.995 clat percentiles (usec): 00:14:36.995 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:14:36.995 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:14:36.995 | 70.00th=[ 190], 80.00th=[ 198], 90.00th=[ 245], 95.00th=[ 258], 00:14:36.995 | 99.00th=[ 359], 99.50th=[ 383], 99.90th=[ 412], 99.95th=[ 424], 00:14:36.995 | 99.99th=[ 424] 00:14:36.995 bw ( KiB/s): min= 4096, max= 4096, per=25.84%, avg=4096.00, stdev= 0.00, samples=1 00:14:36.995 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:36.995 lat (usec) : 250=76.77%, 500=22.82% 00:14:36.995 lat (msec) : 50=0.41% 00:14:36.995 cpu : usr=3.10%, sys=4.70%, ctx=2454, majf=0, minf=1 00:14:36.995 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:36.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:36.995 issued rwts: total=1024,1430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:36.995 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:36.995 00:14:36.995 Run status group 0 (all jobs): 00:14:36.995 READ: bw=8493KiB/s (8696kB/s), 83.4KiB/s-4275KiB/s (85.4kB/s-4378kB/s), io=8552KiB (8757kB), run=1001-1007msec 00:14:36.995 WRITE: bw=15.5MiB/s (16.2MB/s), 2034KiB/s-6132KiB/s (2083kB/s-6279kB/s), io=15.6MiB (16.3MB), run=1001-1007msec 00:14:36.995 00:14:36.995 Disk stats (read/write): 00:14:36.995 nvme0n1: ios=1089/1536, merge=0/0, ticks=458/271, in_queue=729, util=85.67% 00:14:36.995 nvme0n2: ios=67/512, merge=0/0, ticks=1322/116, in_queue=1438, util=88.58% 00:14:36.995 nvme0n3: ios=77/512, merge=0/0, ticks=855/120, in_queue=975, util=92.74% 00:14:36.995 nvme0n4: ios=818/1024, merge=0/0, ticks=686/189, in_queue=875, util=95.44% 00:14:36.995 18:10:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:36.995 [global] 00:14:36.995 thread=1 00:14:36.995 invalidate=1 00:14:36.995 rw=randwrite 00:14:36.995 time_based=1 00:14:36.995 runtime=1 00:14:36.995 ioengine=libaio 00:14:36.995 direct=1 00:14:36.995 bs=4096 00:14:36.995 iodepth=1 00:14:36.995 norandommap=0 00:14:36.995 numjobs=1 00:14:36.995 00:14:36.995 verify_dump=1 00:14:36.995 verify_backlog=512 00:14:36.995 verify_state_save=0 00:14:36.995 do_verify=1 00:14:36.996 verify=crc32c-intel 00:14:36.996 [job0] 00:14:36.996 filename=/dev/nvme0n1 00:14:36.996 [job1] 00:14:36.996 filename=/dev/nvme0n2 00:14:36.996 [job2] 00:14:36.996 filename=/dev/nvme0n3 00:14:36.996 [job3] 00:14:36.996 filename=/dev/nvme0n4 00:14:36.996 Could not set queue depth (nvme0n1) 00:14:36.996 Could not set queue depth (nvme0n2) 00:14:36.996 Could not set queue depth (nvme0n3) 00:14:36.996 Could not set queue depth (nvme0n4) 00:14:36.996 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.996 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.996 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.996 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:36.996 fio-3.35 00:14:36.996 Starting 4 threads 00:14:38.370 00:14:38.370 job0: (groupid=0, jobs=1): err= 0: pid=539172: Tue Nov 26 
18:10:26 2024 00:14:38.370 read: IOPS=1991, BW=7965KiB/s (8156kB/s)(8204KiB/1030msec) 00:14:38.370 slat (nsec): min=5113, max=49919, avg=11882.16, stdev=4123.63 00:14:38.370 clat (usec): min=170, max=41049, avg=266.79, stdev=1558.78 00:14:38.370 lat (usec): min=177, max=41065, avg=278.67, stdev=1558.86 00:14:38.370 clat percentiles (usec): 00:14:38.370 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:14:38.370 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:14:38.370 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 253], 00:14:38.370 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[41157], 99.95th=[41157], 00:14:38.370 | 99.99th=[41157] 00:14:38.370 write: IOPS=2485, BW=9942KiB/s (10.2MB/s)(10.0MiB/1030msec); 0 zone resets 00:14:38.370 slat (nsec): min=7025, max=58196, avg=14451.51, stdev=5514.27 00:14:38.370 clat (usec): min=122, max=830, avg=157.84, stdev=35.10 00:14:38.370 lat (usec): min=131, max=838, avg=172.29, stdev=35.60 00:14:38.370 clat percentiles (usec): 00:14:38.370 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 137], 00:14:38.370 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 149], 60.00th=[ 153], 00:14:38.370 | 70.00th=[ 163], 80.00th=[ 180], 90.00th=[ 194], 95.00th=[ 204], 00:14:38.370 | 99.00th=[ 258], 99.50th=[ 285], 99.90th=[ 627], 99.95th=[ 644], 00:14:38.370 | 99.99th=[ 832] 00:14:38.370 bw ( KiB/s): min= 8192, max=12288, per=57.22%, avg=10240.00, stdev=2896.31, samples=2 00:14:38.370 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:14:38.370 lat (usec) : 250=96.83%, 500=2.88%, 750=0.15%, 1000=0.07% 00:14:38.370 lat (msec) : 50=0.07% 00:14:38.371 cpu : usr=2.92%, sys=6.32%, ctx=4612, majf=0, minf=1 00:14:38.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.371 issued rwts: total=2051,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.371 job1: (groupid=0, jobs=1): err= 0: pid=539173: Tue Nov 26 18:10:26 2024 00:14:38.371 read: IOPS=780, BW=3121KiB/s (3196kB/s)(3124KiB/1001msec) 00:14:38.371 slat (nsec): min=7106, max=45890, avg=15918.35, stdev=7694.71 00:14:38.371 clat (usec): min=195, max=41032, avg=990.13, stdev=5199.04 00:14:38.371 lat (usec): min=203, max=41050, avg=1006.05, stdev=5200.52 00:14:38.371 clat percentiles (usec): 00:14:38.371 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 253], 00:14:38.371 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 297], 60.00th=[ 322], 00:14:38.371 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 457], 95.00th=[ 486], 00:14:38.371 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:38.371 | 99.99th=[41157] 00:14:38.371 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:38.371 slat (nsec): min=6175, max=84904, avg=13619.37, stdev=6900.49 00:14:38.371 clat (usec): min=127, max=417, avg=188.77, stdev=25.05 00:14:38.371 lat (usec): min=136, max=432, avg=202.39, stdev=27.90 00:14:38.371 clat percentiles (usec): 00:14:38.371 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 169], 00:14:38.371 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:14:38.371 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:14:38.371 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 310], 99.95th=[ 416], 00:14:38.371 | 
99.99th=[ 416] 00:14:38.371 bw ( KiB/s): min= 4096, max= 4096, per=22.89%, avg=4096.00, stdev= 0.00, samples=1 00:14:38.371 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:38.371 lat (usec) : 250=64.76%, 500=33.80%, 750=0.72% 00:14:38.371 lat (msec) : 50=0.72% 00:14:38.371 cpu : usr=1.70%, sys=3.30%, ctx=1807, majf=0, minf=1 00:14:38.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.371 issued rwts: total=781,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.371 job2: (groupid=0, jobs=1): err= 0: pid=539174: Tue Nov 26 18:10:26 2024 00:14:38.371 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:14:38.371 slat (nsec): min=9862, max=39229, avg=26960.09, stdev=9923.96 00:14:38.371 clat (usec): min=256, max=41055, avg=39177.70, stdev=8484.96 00:14:38.371 lat (usec): min=266, max=41072, avg=39204.66, stdev=8488.68 00:14:38.371 clat percentiles (usec): 00:14:38.371 | 1.00th=[ 258], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:14:38.371 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:38.371 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:14:38.371 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:14:38.371 | 99.99th=[41157] 00:14:38.371 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:14:38.371 slat (nsec): min=8781, max=59911, avg=18502.44, stdev=7452.94 00:14:38.371 clat (usec): min=159, max=512, avg=223.56, stdev=29.54 00:14:38.371 lat (usec): min=174, max=521, avg=242.06, stdev=30.65 00:14:38.371 clat percentiles (usec): 00:14:38.371 | 1.00th=[ 167], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 202], 00:14:38.371 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:14:38.371 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 262], 00:14:38.371 | 99.00th=[ 318], 99.50th=[ 371], 99.90th=[ 515], 99.95th=[ 515], 00:14:38.371 | 99.99th=[ 515] 00:14:38.371 bw ( KiB/s): min= 4096, max= 4096, per=22.89%, avg=4096.00, stdev= 0.00, samples=1 00:14:38.371 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:38.371 lat (usec) : 250=85.05%, 500=10.65%, 750=0.19% 00:14:38.371 lat (msec) : 50=4.11% 00:14:38.371 cpu : usr=0.68%, sys=1.17%, ctx=536, majf=0, minf=1 00:14:38.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.371 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.371 job3: (groupid=0, jobs=1): err= 0: pid=539175: Tue Nov 26 18:10:26 2024 00:14:38.371 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:14:38.371 slat (nsec): min=9069, max=35163, avg=25829.64, stdev=9459.33 00:14:38.371 clat (usec): min=40910, max=42002, avg=41285.27, stdev=482.24 00:14:38.371 lat (usec): min=40945, max=42019, avg=41311.10, stdev=478.08 00:14:38.371 clat percentiles (usec): 00:14:38.371 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:14:38.371 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:38.371 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:38.371 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:38.371 | 99.99th=[42206] 00:14:38.371 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:14:38.371 slat (nsec): min=6647, max=37273, avg=13345.58, stdev=5468.67 00:14:38.371 clat (usec): min=168, max=359, avg=202.47, stdev=17.10 00:14:38.371 lat (usec): min=177, max=368, avg=215.81, stdev=17.99 00:14:38.371 clat percentiles (usec): 00:14:38.371 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:14:38.371 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 200], 60.00th=[ 204], 00:14:38.371 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 229], 00:14:38.371 | 99.00th=[ 243], 99.50th=[ 281], 99.90th=[ 359], 99.95th=[ 359], 00:14:38.371 | 99.99th=[ 359] 00:14:38.371 bw ( KiB/s): min= 4096, max= 4096, per=22.89%, avg=4096.00, stdev= 0.00, samples=1 00:14:38.371 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:38.371 lat (usec) : 250=94.94%, 500=0.94% 00:14:38.371 lat (msec) : 50=4.12% 00:14:38.371 cpu : usr=0.59%, sys=0.49%, ctx=536, majf=0, minf=1 00:14:38.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:38.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:38.371 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:38.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:38.371 00:14:38.371 Run status group 0 (all jobs): 00:14:38.371 READ: bw=10.9MiB/s (11.4MB/s), 86.1KiB/s-7965KiB/s (88.2kB/s-8156kB/s), io=11.2MiB (11.8MB), run=1001-1030msec 00:14:38.371 WRITE: bw=17.5MiB/s (18.3MB/s), 1990KiB/s-9942KiB/s (2038kB/s-10.2MB/s), io=18.0MiB (18.9MB), run=1001-1030msec 00:14:38.371 00:14:38.371 Disk stats (read/write): 00:14:38.371 nvme0n1: ios=2072/2163, merge=0/0, ticks=1405/328, in_queue=1733, util=99.10% 00:14:38.371 nvme0n2: ios=562/525, merge=0/0, ticks=1099/97, in_queue=1196, util=96.11% 00:14:38.371 nvme0n3: ios=49/512, merge=0/0, ticks=1210/109, in_queue=1319, util=99.04% 00:14:38.371 nvme0n4: ios=40/512, merge=0/0, ticks=1643/98, in_queue=1741, util=98.61% 00:14:38.371 18:10:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:38.371 [global] 00:14:38.371 thread=1 00:14:38.371 invalidate=1 00:14:38.371 rw=write 00:14:38.371 time_based=1 00:14:38.371 runtime=1 00:14:38.371 ioengine=libaio 00:14:38.371 direct=1 00:14:38.371 bs=4096 00:14:38.371 iodepth=128 00:14:38.371 norandommap=0 00:14:38.371 numjobs=1 00:14:38.371 00:14:38.371 verify_dump=1 00:14:38.372 verify_backlog=512 00:14:38.372 verify_state_save=0 00:14:38.372 do_verify=1 00:14:38.372 verify=crc32c-intel 00:14:38.372 [job0] 00:14:38.372 filename=/dev/nvme0n1 00:14:38.372 [job1] 00:14:38.372 filename=/dev/nvme0n2 00:14:38.372 [job2] 00:14:38.372 filename=/dev/nvme0n3 00:14:38.372 [job3] 00:14:38.372 filename=/dev/nvme0n4 00:14:38.372 Could not set queue depth (nvme0n1) 00:14:38.372 Could not set queue depth (nvme0n2) 00:14:38.372 Could not set queue depth (nvme0n3) 00:14:38.372 Could not set queue depth (nvme0n4) 00:14:38.629 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.629 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.629 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.629 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:38.629 fio-3.35 00:14:38.629 Starting 4 threads 00:14:40.003 00:14:40.003 job0: (groupid=0, jobs=1): err= 0: pid=539411: Tue Nov 26 18:10:27 2024 00:14:40.003 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:14:40.003 slat (usec): min=3, max=15315, avg=132.30, stdev=886.90 00:14:40.003 clat (usec): min=6141, max=49854, avg=15268.70, stdev=7118.99 00:14:40.003 lat (usec): min=6149, max=49861, avg=15401.00, stdev=7189.87 00:14:40.003 clat percentiles (usec): 00:14:40.003 | 1.00th=[ 7177], 5.00th=[10028], 10.00th=[10159], 20.00th=[10945], 00:14:40.003 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[13960], 00:14:40.003 | 70.00th=[15270], 80.00th=[18482], 90.00th=[23462], 95.00th=[30802], 00:14:40.003 | 99.00th=[45876], 99.50th=[46924], 99.90th=[50070], 99.95th=[50070], 00:14:40.003 | 99.99th=[50070] 00:14:40.003 write: IOPS=3422, BW=13.4MiB/s (14.0MB/s)(13.6MiB/1014msec); 0 zone resets 00:14:40.003 slat (usec): min=4, max=11945, avg=165.65, stdev=765.33 00:14:40.003 clat (usec): min=1263, max=57468, avg=23544.31, stdev=12003.28 00:14:40.003 lat (usec): min=1314, max=57478, avg=23709.95, stdev=12087.87 00:14:40.003 clat percentiles (usec): 00:14:40.003 | 1.00th=[ 4555], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10814], 00:14:40.003 | 30.00th=[14222], 40.00th=[19268], 50.00th=[21365], 60.00th=[25560], 00:14:40.003 | 70.00th=[30802], 80.00th=[35390], 90.00th=[41157], 95.00th=[44827], 00:14:40.003 | 99.00th=[48497], 99.50th=[51119], 99.90th=[57410], 99.95th=[57410], 00:14:40.003 | 99.99th=[57410] 00:14:40.003 bw ( KiB/s): min=11328, max=15408, per=24.19%, avg=13368.00, stdev=2885.00, samples=2 00:14:40.003 iops : min= 2832, max= 3852, avg=3342.00, stdev=721.25, samples=2 00:14:40.003 lat (msec) : 2=0.02%, 4=0.28%, 10=6.51%, 20=58.03%, 50=34.84% 00:14:40.003 lat (msec) : 100=0.34% 00:14:40.003 cpu : usr=1.78%, sys=5.53%, ctx=356, majf=0, minf=1 00:14:40.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:14:40.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.003 issued rwts: total=3072,3470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.003 job1: (groupid=0, jobs=1): err= 0: pid=539419: Tue Nov 26 18:10:27 2024 00:14:40.003 read: IOPS=4809, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1008msec) 00:14:40.003 slat (usec): min=2, max=11070, avg=86.67, stdev=595.35 00:14:40.003 clat (usec): min=3414, max=25150, avg=11197.37, stdev=3051.39 00:14:40.003 lat (usec): min=3422, max=25223, avg=11284.04, stdev=3082.85 00:14:40.003 clat percentiles (usec): 00:14:40.003 | 1.00th=[ 5669], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 8848], 00:14:40.003 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11731], 00:14:40.003 | 70.00th=[12780], 80.00th=[13566], 90.00th=[15139], 95.00th=[16909], 00:14:40.003 | 99.00th=[20841], 99.50th=[21627], 99.90th=[25035], 99.95th=[25035], 00:14:40.003 | 99.99th=[25035] 00:14:40.003 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:14:40.003 slat (usec): min=3, max=10713, avg=103.80, stdev=559.61 00:14:40.003 clat (usec): 
min=322, max=59846, avg=14358.35, stdev=12156.71 00:14:40.003 lat (usec): min=336, max=59866, avg=14462.15, stdev=12242.94 00:14:40.003 clat percentiles (usec): 00:14:40.003 | 1.00th=[ 2737], 5.00th=[ 4555], 10.00th=[ 6390], 20.00th=[ 8455], 00:14:40.003 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10421], 00:14:40.003 | 70.00th=[11994], 80.00th=[13173], 90.00th=[37487], 95.00th=[45876], 00:14:40.003 | 99.00th=[56886], 99.50th=[58459], 99.90th=[60031], 99.95th=[60031], 00:14:40.003 | 99.99th=[60031] 00:14:40.003 bw ( KiB/s): min=13168, max=27792, per=37.05%, avg=20480.00, stdev=10340.73, samples=2 00:14:40.003 iops : min= 3292, max= 6948, avg=5120.00, stdev=2585.18, samples=2 00:14:40.003 lat (usec) : 500=0.02%, 1000=0.05% 00:14:40.003 lat (msec) : 2=0.19%, 4=1.81%, 10=43.94%, 20=45.56%, 50=6.55% 00:14:40.003 lat (msec) : 100=1.89% 00:14:40.003 cpu : usr=5.66%, sys=10.53%, ctx=538, majf=0, minf=1 00:14:40.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:40.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.003 issued rwts: total=4848,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.003 job2: (groupid=0, jobs=1): err= 0: pid=539447: Tue Nov 26 18:10:27 2024 00:14:40.003 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:14:40.003 slat (usec): min=2, max=54086, avg=265.55, stdev=2340.62 00:14:40.003 clat (msec): min=10, max=115, avg=29.49, stdev=27.26 00:14:40.003 lat (msec): min=10, max=115, avg=29.76, stdev=27.45 00:14:40.004 clat percentiles (msec): 00:14:40.004 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 16], 00:14:40.004 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:14:40.004 | 70.00th=[ 23], 80.00th=[ 36], 90.00th=[ 72], 95.00th=[ 105], 00:14:40.004 | 99.00th=[ 115], 99.50th=[ 115], 99.90th=[ 115], 99.95th=[ 115], 00:14:40.004 | 99.99th=[ 115] 00:14:40.004 write: IOPS=2332, BW=9331KiB/s (9555kB/s)(9396KiB/1007msec); 0 zone resets 00:14:40.004 slat (usec): min=3, max=15108, avg=187.63, stdev=965.23 00:14:40.004 clat (msec): min=3, max=117, avg=28.59, stdev=20.20 00:14:40.004 lat (msec): min=5, max=117, avg=28.78, stdev=20.26 00:14:40.004 clat percentiles (msec): 00:14:40.004 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 13], 20.00th=[ 15], 00:14:40.004 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 24], 00:14:40.004 | 70.00th=[ 28], 80.00th=[ 44], 90.00th=[ 54], 95.00th=[ 61], 00:14:40.004 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 117], 99.95th=[ 117], 00:14:40.004 | 99.99th=[ 117] 00:14:40.004 bw ( KiB/s): min= 5480, max=12288, per=16.07%, avg=8884.00, stdev=4813.98, samples=2 00:14:40.004 iops : min= 1370, max= 3072, avg=2221.00, stdev=1203.50, samples=2 00:14:40.004 lat (msec) : 4=0.02%, 10=4.23%, 20=49.74%, 50=31.13%, 100=10.76% 00:14:40.004 lat (msec) : 250=4.12% 00:14:40.004 cpu : usr=2.88%, sys=2.98%, ctx=261, majf=0, minf=1 00:14:40.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:40.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.004 issued rwts: total=2048,2349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.004 job3: (groupid=0, jobs=1): err= 0: pid=539460: Tue Nov 26 
18:10:27 2024 00:14:40.004 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1008msec) 00:14:40.004 slat (usec): min=2, max=13917, avg=140.50, stdev=823.80 00:14:40.004 clat (usec): min=6595, max=63366, avg=18455.56, stdev=8856.69 00:14:40.004 lat (usec): min=6606, max=63381, avg=18596.06, stdev=8933.47 00:14:40.004 clat percentiles (usec): 00:14:40.004 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[12649], 20.00th=[13173], 00:14:40.004 | 30.00th=[13566], 40.00th=[14353], 50.00th=[14746], 60.00th=[15926], 00:14:40.004 | 70.00th=[18482], 80.00th=[23462], 90.00th=[31327], 95.00th=[35390], 00:14:40.004 | 99.00th=[57934], 99.50th=[58983], 99.90th=[58983], 99.95th=[62129], 00:14:40.004 | 99.99th=[63177] 00:14:40.004 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:14:40.004 slat (usec): min=3, max=23202, avg=177.49, stdev=1248.57 00:14:40.004 clat (msec): min=6, max=101, avg=22.29, stdev=17.79 00:14:40.004 lat (msec): min=6, max=101, avg=22.47, stdev=17.93 00:14:40.004 clat percentiles (msec): 00:14:40.004 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:14:40.004 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:14:40.004 | 70.00th=[ 18], 80.00th=[ 37], 90.00th=[ 50], 95.00th=[ 54], 00:14:40.004 | 99.00th=[ 102], 99.50th=[ 102], 99.90th=[ 102], 99.95th=[ 102], 00:14:40.004 | 99.99th=[ 102] 00:14:40.004 bw ( KiB/s): min= 8192, max=16384, per=22.23%, avg=12288.00, stdev=5792.62, samples=2 00:14:40.004 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:14:40.004 lat (msec) : 10=3.25%, 20=71.41%, 50=19.47%, 100=5.36%, 250=0.51% 00:14:40.004 cpu : usr=3.48%, sys=5.16%, ctx=212, majf=0, minf=1 00:14:40.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:40.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:40.004 issued rwts: total=3045,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:40.004 00:14:40.004 Run status group 0 (all jobs): 00:14:40.004 READ: bw=50.1MiB/s (52.6MB/s), 8135KiB/s-18.8MiB/s (8330kB/s-19.7MB/s), io=50.8MiB (53.3MB), run=1007-1014msec 00:14:40.004 WRITE: bw=54.0MiB/s (56.6MB/s), 9331KiB/s-19.8MiB/s (9555kB/s-20.8MB/s), io=54.7MiB (57.4MB), run=1007-1014msec 00:14:40.004 00:14:40.004 Disk stats (read/write): 00:14:40.004 nvme0n1: ios=2607/3015, merge=0/0, ticks=37708/65689, in_queue=103397, util=98.10% 00:14:40.004 nvme0n2: ios=3624/4096, merge=0/0, ticks=36453/61656, in_queue=98109, util=96.14% 00:14:40.004 nvme0n3: ios=1830/2048, merge=0/0, ticks=18421/23371, in_queue=41792, util=88.92% 00:14:40.004 nvme0n4: ios=2582/2929, merge=0/0, ticks=17009/26485, in_queue=43494, util=98.00% 00:14:40.004 18:10:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:40.004 [global] 00:14:40.004 thread=1 00:14:40.004 invalidate=1 00:14:40.004 rw=randwrite 00:14:40.004 time_based=1 00:14:40.004 runtime=1 00:14:40.004 ioengine=libaio 00:14:40.004 direct=1 00:14:40.004 bs=4096 00:14:40.004 iodepth=128 00:14:40.004 norandommap=0 00:14:40.004 numjobs=1 00:14:40.004 00:14:40.004 verify_dump=1 00:14:40.004 verify_backlog=512 00:14:40.004 verify_state_save=0 00:14:40.004 do_verify=1 00:14:40.004 verify=crc32c-intel 00:14:40.004 [job0] 00:14:40.004 filename=/dev/nvme0n1 00:14:40.004 
[job1] 00:14:40.004 filename=/dev/nvme0n2 00:14:40.004 [job2] 00:14:40.004 filename=/dev/nvme0n3 00:14:40.004 [job3] 00:14:40.004 filename=/dev/nvme0n4 00:14:40.004 Could not set queue depth (nvme0n1) 00:14:40.004 Could not set queue depth (nvme0n2) 00:14:40.004 Could not set queue depth (nvme0n3) 00:14:40.004 Could not set queue depth (nvme0n4) 00:14:40.004 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.004 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.004 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.004 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:40.004 fio-3.35 00:14:40.004 Starting 4 threads 00:14:41.384 00:14:41.384 job0: (groupid=0, jobs=1): err= 0: pid=539753: Tue Nov 26 18:10:29 2024 00:14:41.384 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:14:41.384 slat (usec): min=2, max=16564, avg=174.27, stdev=1130.54 00:14:41.384 clat (usec): min=11147, max=47791, avg=22599.54, stdev=5597.72 00:14:41.384 lat (usec): min=11153, max=47824, avg=22773.81, stdev=5699.25 00:14:41.384 clat percentiles (usec): 00:14:41.384 | 1.00th=[13698], 5.00th=[14484], 10.00th=[15664], 20.00th=[17171], 00:14:41.384 | 30.00th=[19268], 40.00th=[21103], 50.00th=[22938], 60.00th=[23200], 00:14:41.384 | 70.00th=[24249], 80.00th=[27657], 90.00th=[29492], 95.00th=[32900], 00:14:41.384 | 99.00th=[37487], 99.50th=[39584], 99.90th=[43254], 99.95th=[43254], 00:14:41.384 | 99.99th=[47973] 00:14:41.384 write: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1010msec); 0 zone resets 00:14:41.384 slat (usec): min=3, max=10296, avg=173.04, stdev=846.54 00:14:41.384 clat (usec): min=6169, max=59594, avg=23077.08, stdev=9955.49 00:14:41.384 lat (usec): min=6181, max=59602, avg=23250.12, stdev=10038.48 00:14:41.384 clat percentiles (usec): 00:14:41.384 | 1.00th=[10421], 5.00th=[10945], 10.00th=[13698], 20.00th=[14484], 00:14:41.384 | 30.00th=[17433], 40.00th=[18744], 50.00th=[22414], 60.00th=[23462], 00:14:41.384 | 70.00th=[23987], 80.00th=[26346], 90.00th=[36439], 95.00th=[47449], 00:14:41.384 | 99.00th=[54264], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:14:41.384 | 99.99th=[59507] 00:14:41.384 bw ( KiB/s): min=10944, max=12288, per=17.61%, avg=11616.00, stdev=950.35, samples=2 00:14:41.384 iops : min= 2736, max= 3072, avg=2904.00, stdev=237.59, samples=2 00:14:41.384 lat (msec) : 10=0.41%, 20=38.17%, 50=59.61%, 100=1.81% 00:14:41.384 cpu : usr=4.86%, sys=4.56%, ctx=275, majf=0, minf=1 00:14:41.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:41.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.384 issued rwts: total=2560,3031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.384 job1: (groupid=0, jobs=1): err= 0: pid=539754: Tue Nov 26 18:10:29 2024 00:14:41.384 read: IOPS=5569, BW=21.8MiB/s (22.8MB/s)(21.9MiB/1008msec) 00:14:41.384 slat (usec): min=2, max=10378, avg=91.92, stdev=640.00 00:14:41.384 clat (usec): min=2850, max=22076, avg=11954.96, stdev=2881.85 00:14:41.384 lat (usec): min=4062, max=22085, avg=12046.88, stdev=2919.16 00:14:41.384 clat percentiles (usec): 00:14:41.384 | 1.00th=[ 5145], 
5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10552], 00:14:41.384 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:14:41.384 | 70.00th=[11994], 80.00th=[13304], 90.00th=[16581], 95.00th=[18482], 00:14:41.384 | 99.00th=[20841], 99.50th=[21365], 99.90th=[22152], 99.95th=[22152], 00:14:41.384 | 99.99th=[22152] 00:14:41.384 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:14:41.384 slat (usec): min=3, max=9071, avg=69.88, stdev=328.17 00:14:41.384 clat (usec): min=939, max=45160, avg=10704.98, stdev=3018.36 00:14:41.384 lat (usec): min=947, max=45167, avg=10774.86, stdev=3039.74 00:14:41.384 clat percentiles (usec): 00:14:41.384 | 1.00th=[ 2835], 5.00th=[ 5145], 10.00th=[ 6652], 20.00th=[ 8848], 00:14:41.384 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:14:41.384 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12518], 00:14:41.384 | 99.00th=[18482], 99.50th=[26346], 99.90th=[39584], 99.95th=[39584], 00:14:41.384 | 99.99th=[45351] 00:14:41.384 bw ( KiB/s): min=21472, max=23584, per=34.14%, avg=22528.00, stdev=1493.41, samples=2 00:14:41.384 iops : min= 5368, max= 5896, avg=5632.00, stdev=373.35, samples=2 00:14:41.384 lat (usec) : 1000=0.06% 00:14:41.384 lat (msec) : 2=0.20%, 4=1.00%, 10=18.00%, 20=79.19%, 50=1.55% 00:14:41.384 cpu : usr=6.55%, sys=10.23%, ctx=650, majf=0, minf=1 00:14:41.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:41.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.384 issued rwts: total=5614,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.384 job2: (groupid=0, jobs=1): err= 0: pid=539755: Tue Nov 26 18:10:29 2024 00:14:41.384 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:14:41.384 slat (usec): min=2, max=18789, avg=171.32, stdev=1102.43 00:14:41.384 clat (usec): min=10383, max=51211, avg=20877.77, stdev=5599.77 00:14:41.384 lat (usec): min=10396, max=51229, avg=21049.09, stdev=5713.59 00:14:41.384 clat percentiles (usec): 00:14:41.384 | 1.00th=[12518], 5.00th=[14091], 10.00th=[15401], 20.00th=[16450], 00:14:41.384 | 30.00th=[17433], 40.00th=[18744], 50.00th=[19530], 60.00th=[20841], 00:14:41.384 | 70.00th=[22938], 80.00th=[24773], 90.00th=[28705], 95.00th=[31851], 00:14:41.384 | 99.00th=[39060], 99.50th=[45351], 99.90th=[51119], 99.95th=[51119], 00:14:41.384 | 99.99th=[51119] 00:14:41.384 write: IOPS=2953, BW=11.5MiB/s (12.1MB/s)(11.7MiB/1010msec); 0 zone resets 00:14:41.384 slat (usec): min=4, max=14673, avg=179.51, stdev=951.65 00:14:41.384 clat (usec): min=8991, max=65306, avg=24712.43, stdev=9728.27 00:14:41.384 lat (usec): min=10234, max=65317, avg=24891.94, stdev=9815.69 00:14:41.384 clat percentiles (usec): 00:14:41.384 | 1.00th=[11863], 5.00th=[14615], 10.00th=[14877], 20.00th=[16188], 00:14:41.384 | 30.00th=[21103], 40.00th=[22152], 50.00th=[23200], 60.00th=[23725], 00:14:41.384 | 70.00th=[24773], 80.00th=[27919], 90.00th=[40633], 95.00th=[46400], 00:14:41.384 | 99.00th=[56886], 99.50th=[60031], 99.90th=[65274], 99.95th=[65274], 00:14:41.384 | 99.99th=[65274] 00:14:41.384 bw ( KiB/s): min=10560, max=12288, per=17.31%, avg=11424.00, stdev=1221.88, samples=2 00:14:41.384 iops : min= 2640, max= 3072, avg=2856.00, stdev=305.47, samples=2 00:14:41.384 lat (msec) : 10=0.02%, 20=40.48%, 50=57.26%, 100=2.24% 00:14:41.384 cpu : usr=2.78%, 
sys=5.75%, ctx=265, majf=0, minf=1 00:14:41.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:14:41.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.384 issued rwts: total=2560,2983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.384 job3: (groupid=0, jobs=1): err= 0: pid=539756: Tue Nov 26 18:10:29 2024 00:14:41.384 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:14:41.384 slat (usec): min=2, max=4837, avg=99.61, stdev=526.64 00:14:41.384 clat (usec): min=9376, max=19685, avg=13113.53, stdev=1108.91 00:14:41.384 lat (usec): min=9408, max=19724, avg=13213.13, stdev=1159.88 00:14:41.384 clat percentiles (usec): 00:14:41.384 | 1.00th=[ 9896], 5.00th=[11207], 10.00th=[12256], 20.00th=[12649], 00:14:41.384 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:14:41.384 | 70.00th=[13304], 80.00th=[13435], 90.00th=[14484], 95.00th=[15533], 00:14:41.384 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17695], 99.95th=[17695], 00:14:41.384 | 99.99th=[19792] 00:14:41.384 write: IOPS=4989, BW=19.5MiB/s (20.4MB/s)(19.6MiB/1005msec); 0 zone resets 00:14:41.384 slat (usec): min=4, max=11563, avg=98.22, stdev=520.08 00:14:41.384 clat (usec): min=3725, max=26357, avg=13215.69, stdev=1852.03 00:14:41.384 lat (usec): min=4450, max=26426, avg=13313.92, stdev=1875.75 00:14:41.384 clat percentiles (usec): 00:14:41.384 | 1.00th=[ 8848], 5.00th=[10814], 10.00th=[12256], 20.00th=[12518], 00:14:41.384 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:14:41.384 | 70.00th=[13304], 80.00th=[13566], 90.00th=[15139], 95.00th=[16188], 00:14:41.384 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22414], 99.95th=[22414], 00:14:41.384 | 99.99th=[26346] 00:14:41.384 bw ( KiB/s): min=18616, max=20480, per=29.63%, avg=19548.00, stdev=1318.05, samples=2 00:14:41.384 iops : min= 4654, max= 5120, avg=4887.00, stdev=329.51, samples=2 00:14:41.384 lat (msec) : 4=0.01%, 10=1.93%, 20=97.17%, 50=0.88% 00:14:41.384 cpu : usr=6.67%, sys=9.36%, ctx=434, majf=0, minf=1 00:14:41.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:41.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:41.384 issued rwts: total=4608,5014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:41.384 00:14:41.384 Run status group 0 (all jobs): 00:14:41.384 READ: bw=59.3MiB/s (62.2MB/s), 9.90MiB/s-21.8MiB/s (10.4MB/s-22.8MB/s), io=59.9MiB (62.8MB), run=1005-1010msec 00:14:41.384 WRITE: bw=64.4MiB/s (67.6MB/s), 11.5MiB/s-21.8MiB/s (12.1MB/s-22.9MB/s), io=65.1MiB (68.2MB), run=1005-1010msec 00:14:41.384 00:14:41.384 Disk stats (read/write): 00:14:41.384 nvme0n1: ios=2201/2560, merge=0/0, ticks=24089/27960, in_queue=52049, util=98.60% 00:14:41.384 nvme0n2: ios=4648/4919, merge=0/0, ticks=50166/46837, in_queue=97003, util=97.46% 00:14:41.384 nvme0n3: ios=2135/2560, merge=0/0, ticks=22019/29416, in_queue=51435, util=97.29% 00:14:41.384 nvme0n4: ios=4067/4096, merge=0/0, ticks=17128/16951, in_queue=34079, util=90.68% 00:14:41.384 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:41.384 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # 
fio_pid=539894 00:14:41.384 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:41.384 18:10:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:41.384 [global] 00:14:41.384 thread=1 00:14:41.384 invalidate=1 00:14:41.384 rw=read 00:14:41.384 time_based=1 00:14:41.384 runtime=10 00:14:41.384 ioengine=libaio 00:14:41.384 direct=1 00:14:41.384 bs=4096 00:14:41.384 iodepth=1 00:14:41.384 norandommap=1 00:14:41.384 numjobs=1 00:14:41.384 00:14:41.384 [job0] 00:14:41.384 filename=/dev/nvme0n1 00:14:41.384 [job1] 00:14:41.384 filename=/dev/nvme0n2 00:14:41.384 [job2] 00:14:41.384 filename=/dev/nvme0n3 00:14:41.384 [job3] 00:14:41.384 filename=/dev/nvme0n4 00:14:41.384 Could not set queue depth (nvme0n1) 00:14:41.384 Could not set queue depth (nvme0n2) 00:14:41.384 Could not set queue depth (nvme0n3) 00:14:41.384 Could not set queue depth (nvme0n4) 00:14:41.384 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:41.384 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:41.384 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:41.384 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:41.384 fio-3.35 00:14:41.384 Starting 4 threads 00:14:44.664 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:44.664 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:44.664 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10014720, buflen=4096 00:14:44.664 fio: pid=539993, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:44.923 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:44.923 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:44.923 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=385024, buflen=4096 00:14:44.923 fio: pid=539990, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:45.181 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:45.181 18:10:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:45.181 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=499712, buflen=4096 00:14:45.181 fio: pid=539988, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:45.439 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:45.439 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc2 00:14:45.439 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=36093952, buflen=4096 00:14:45.439 fio: pid=539989, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:14:45.439 00:14:45.439 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=539988: Tue Nov 26 18:10:33 2024 00:14:45.439 read: IOPS=34, BW=139KiB/s (142kB/s)(488KiB/3521msec) 00:14:45.439 slat (usec): min=13, max=13906, avg=243.85, stdev=1700.81 00:14:45.439 clat (usec): min=254, max=44004, avg=28411.05, stdev=19264.72 00:14:45.439 lat (usec): min=275, max=56009, avg=28656.59, stdev=19491.71 00:14:45.439 clat percentiles (usec): 00:14:45.439 | 1.00th=[ 285], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 457], 00:14:45.439 | 30.00th=[ 502], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:14:45.439 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:45.439 | 99.00th=[42206], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:14:45.439 | 99.99th=[43779] 00:14:45.439 bw ( KiB/s): min= 104, max= 240, per=1.21%, avg=146.67, stdev=49.75, samples=6 00:14:45.439 iops : min= 26, max= 60, avg=36.67, stdev=12.44, samples=6 00:14:45.439 lat (usec) : 500=29.27%, 750=0.81%, 1000=1.63% 00:14:45.439 lat (msec) : 50=67.48% 00:14:45.439 cpu : usr=0.20%, sys=0.00%, ctx=127, majf=0, minf=1 00:14:45.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.439 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.439 issued rwts: total=123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:45.439 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=539989: Tue Nov 26 18:10:33 2024 00:14:45.439 read: IOPS=2320, BW=9281KiB/s (9503kB/s)(34.4MiB/3798msec) 00:14:45.439 slat (usec): min=4, max=15758, avg=14.22, stdev=238.21 00:14:45.439 clat (usec): min=166, max=42194, avg=410.94, stdev=2708.07 00:14:45.439 lat (usec): min=172, max=42201, avg=425.16, stdev=2719.38 00:14:45.439 clat percentiles (usec): 00:14:45.439 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:14:45.439 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 223], 00:14:45.439 | 70.00th=[ 245], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 322], 00:14:45.439 | 99.00th=[ 537], 99.50th=[ 652], 99.90th=[42206], 99.95th=[42206], 00:14:45.439 | 99.99th=[42206] 00:14:45.439 bw ( KiB/s): min= 96, max=16660, per=72.62%, avg=8775.43, stdev=6249.44, samples=7 00:14:45.439 iops : min= 24, max= 4165, avg=2193.86, stdev=1562.36, samples=7 00:14:45.439 lat (usec) : 250=71.91%, 500=26.76%, 750=0.84% 00:14:45.439 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01%, 50=0.43% 00:14:45.439 cpu : usr=1.26%, sys=3.13%, ctx=8821, majf=0, minf=2 00:14:45.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.439 issued rwts: total=8813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:45.439 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=539990: Tue Nov 26 18:10:33 2024 00:14:45.439 read: 
IOPS=29, BW=117KiB/s (120kB/s)(376KiB/3216msec) 00:14:45.439 slat (nsec): min=5651, max=53911, avg=20971.74, stdev=10763.86 00:14:45.439 clat (usec): min=245, max=42206, avg=33931.64, stdev=15861.66 00:14:45.439 lat (usec): min=253, max=42223, avg=33952.68, stdev=15867.86 00:14:45.439 clat percentiles (usec): 00:14:45.439 | 1.00th=[ 247], 5.00th=[ 273], 10.00th=[ 445], 20.00th=[40633], 00:14:45.439 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:14:45.439 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:45.439 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:45.439 | 99.99th=[42206] 00:14:45.439 bw ( KiB/s): min= 96, max= 224, per=0.98%, avg=118.67, stdev=51.70, samples=6 00:14:45.439 iops : min= 24, max= 56, avg=29.67, stdev=12.93, samples=6 00:14:45.439 lat (usec) : 250=2.11%, 500=15.79% 00:14:45.439 lat (msec) : 50=81.05% 00:14:45.439 cpu : usr=0.09%, sys=0.00%, ctx=95, majf=0, minf=1 00:14:45.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.439 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.439 issued rwts: total=95,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:45.439 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=539993: Tue Nov 26 18:10:33 2024 00:14:45.439 read: IOPS=834, BW=3337KiB/s (3417kB/s)(9780KiB/2931msec) 00:14:45.439 slat (nsec): min=5110, max=64681, avg=18823.46, stdev=10736.01 00:14:45.439 clat (usec): min=188, max=42096, avg=1165.42, stdev=5877.23 00:14:45.439 lat (usec): min=199, max=42113, avg=1184.25, stdev=5878.14 00:14:45.439 clat percentiles (usec): 00:14:45.439 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 251], 00:14:45.439 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 314], 00:14:45.439 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 453], 95.00th=[ 494], 00:14:45.439 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:45.439 | 99.99th=[42206] 00:14:45.439 bw ( KiB/s): min= 96, max= 6816, per=19.30%, avg=2332.80, stdev=3165.02, samples=5 00:14:45.439 iops : min= 24, max= 1704, avg=583.20, stdev=791.25, samples=5 00:14:45.439 lat (usec) : 250=19.79%, 500=76.17%, 750=1.92% 00:14:45.439 lat (msec) : 50=2.09% 00:14:45.439 cpu : usr=0.85%, sys=1.57%, ctx=2447, majf=0, minf=2 00:14:45.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.439 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.439 issued rwts: total=2446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:45.439 00:14:45.439 Run status group 0 (all jobs): 00:14:45.439 READ: bw=11.8MiB/s (12.4MB/s), 117KiB/s-9281KiB/s (120kB/s-9503kB/s), io=44.8MiB (47.0MB), run=2931-3798msec 00:14:45.439 00:14:45.439 Disk stats (read/write): 00:14:45.439 nvme0n1: ios=159/0, merge=0/0, ticks=4338/0, in_queue=4338, util=99.77% 00:14:45.439 nvme0n2: ios=8095/0, merge=0/0, ticks=3408/0, in_queue=3408, util=95.69% 00:14:45.439 nvme0n3: ios=91/0, merge=0/0, ticks=3068/0, in_queue=3068, util=96.79% 00:14:45.439 nvme0n4: ios=2313/0, merge=0/0, ticks=3752/0, in_queue=3752, util=99.86% 00:14:45.698 18:10:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:45.698 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:45.956 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:45.957 18:10:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:46.215 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.215 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:46.473 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:46.473 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:46.731 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:46.731 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 539894 00:14:46.731 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:46.731 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:46.989 nvmf hotplug test: fio failed as expected 00:14:46.989 18:10:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.246 rmmod nvme_tcp 00:14:47.246 rmmod nvme_fabrics 00:14:47.246 rmmod nvme_keyring 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 537851 ']' 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 537851 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 537851 ']' 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 537851 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 537851 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 537851' 00:14:47.246 killing process with pid 537851 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 537851 00:14:47.246 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 537851 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 
00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.506 18:10:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.040 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:50.040 00:14:50.040 real 0m24.499s 00:14:50.040 user 1m25.922s 00:14:50.040 sys 0m6.570s 00:14:50.040 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.040 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.040 ************************************ 00:14:50.040 END TEST nvmf_fio_target 00:14:50.040 ************************************ 00:14:50.040 18:10:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:50.040 18:10:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.040 18:10:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.040 18:10:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:50.040 ************************************ 00:14:50.040 START TEST nvmf_bdevio 00:14:50.040 ************************************ 00:14:50.040 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:50.041 * Looking for test storage... 
00:14:50.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:50.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.041 --rc genhtml_branch_coverage=1 00:14:50.041 --rc genhtml_function_coverage=1 00:14:50.041 --rc genhtml_legend=1 00:14:50.041 --rc geninfo_all_blocks=1 00:14:50.041 --rc geninfo_unexecuted_blocks=1 00:14:50.041 00:14:50.041 ' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:50.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.041 --rc genhtml_branch_coverage=1 00:14:50.041 --rc genhtml_function_coverage=1 00:14:50.041 --rc genhtml_legend=1 00:14:50.041 --rc geninfo_all_blocks=1 00:14:50.041 --rc geninfo_unexecuted_blocks=1 00:14:50.041 00:14:50.041 ' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:50.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.041 --rc genhtml_branch_coverage=1 00:14:50.041 --rc genhtml_function_coverage=1 00:14:50.041 --rc genhtml_legend=1 00:14:50.041 --rc geninfo_all_blocks=1 00:14:50.041 --rc geninfo_unexecuted_blocks=1 00:14:50.041 00:14:50.041 ' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:50.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.041 --rc genhtml_branch_coverage=1 00:14:50.041 --rc genhtml_function_coverage=1 00:14:50.041 --rc genhtml_legend=1 00:14:50.041 --rc geninfo_all_blocks=1 00:14:50.041 --rc geninfo_unexecuted_blocks=1 00:14:50.041 00:14:50.041 ' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.041 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.042 18:10:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:51.942 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:51.942 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:51.942 18:10:39 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.942 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:51.943 Found net devices under 0000:09:00.0: cvl_0_0 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:51.943 Found net devices under 0000:09:00.1: cvl_0_1 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.943 
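The trace above is gather_supported_nvmf_pci_devs from nvmf/common.sh: it whitelists NVMe-oF-capable NICs by PCI vendor/device ID (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox ConnectX IDs) and then resolves each matching PCI function to its kernel net device through sysfs. On this node both E810 ports (0000:09:00.0 and 0000:09:00.1, device 0x159b, ice driver) resolve to cvl_0_0 and cvl_0_1, and cvl_0_0 is picked as the target-side interface. A rough standalone equivalent of that discovery step, assuming the E810 ports are the only 8086:159b functions in the system (a sketch, not the common.sh implementation):

  # enumerate Intel E810 100G functions and the netdevs bound to them
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      # each PCI function exposes its netdev name under /sys/bus/pci/devices/<bdf>/net/
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net device under $pci: $(basename "$net")"
      done
  done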
18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.943 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.201 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.201 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.201 18:10:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:52.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:14:52.201 00:14:52.201 --- 10.0.0.2 ping statistics --- 00:14:52.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.201 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:52.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:14:52.201 00:14:52.201 --- 10.0.0.1 ping statistics --- 00:14:52.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.201 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=542654 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 542654 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 542654 ']' 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.201 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.202 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.202 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.202 [2024-11-26 18:10:40.139625] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
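nvmf_tcp_init (traced above) splits the two ports into a point-to-point target/initiator pair: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, an iptables rule opens TCP port 4420, and one ping in each direction confirms the link before nvmf_tgt is launched inside the namespace with ip netns exec. Condensed from the commands in the trace (the harness additionally flushes stale addresses first and tags its iptables rule with an SPDK_NVMF comment):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1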
00:14:52.202 [2024-11-26 18:10:40.139711] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.459 [2024-11-26 18:10:40.215311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.459 [2024-11-26 18:10:40.273879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.459 [2024-11-26 18:10:40.273927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.459 [2024-11-26 18:10:40.273955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.459 [2024-11-26 18:10:40.273966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.459 [2024-11-26 18:10:40.273976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.459 [2024-11-26 18:10:40.275579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:52.459 [2024-11-26 18:10:40.275706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:52.459 [2024-11-26 18:10:40.275770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:52.459 [2024-11-26 18:10:40.275773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.459 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.459 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.460 [2024-11-26 18:10:40.428256] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.460 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.718 Malloc0 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.718 18:10:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:52.718 [2024-11-26 18:10:40.492855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:52.718 { 00:14:52.718 "params": { 00:14:52.718 "name": "Nvme$subsystem", 00:14:52.718 "trtype": "$TEST_TRANSPORT", 00:14:52.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:52.718 "adrfam": "ipv4", 00:14:52.718 "trsvcid": "$NVMF_PORT", 00:14:52.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:52.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:52.718 "hdgst": ${hdgst:-false}, 00:14:52.718 "ddgst": ${ddgst:-false} 00:14:52.718 }, 00:14:52.718 "method": "bdev_nvme_attach_controller" 00:14:52.718 } 00:14:52.718 EOF 00:14:52.718 )") 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:14:52.718 18:10:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:52.718 "params": { 00:14:52.718 "name": "Nvme1", 00:14:52.718 "trtype": "tcp", 00:14:52.718 "traddr": "10.0.0.2", 00:14:52.718 "adrfam": "ipv4", 00:14:52.718 "trsvcid": "4420", 00:14:52.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.718 "hdgst": false, 00:14:52.718 "ddgst": false 00:14:52.718 }, 00:14:52.718 "method": "bdev_nvme_attach_controller" 00:14:52.718 }' 00:14:52.718 [2024-11-26 18:10:40.541773] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
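With the target listening, bdevio.sh provisions a minimal subsystem over the RPC socket: a TCP transport with an 8192-byte I/O unit size, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420; gen_nvmf_target_json then prints the bdev_nvme_attach_controller config shown above, which the bdevio binary reads via --json /dev/fd/62. Outside the rpc_cmd wrapper the same provisioning sequence would look roughly like this (assuming scripts/rpc.py and the default /var/tmp/spdk.sock socket):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420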
00:14:52.718 [2024-11-26 18:10:40.541843] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542781 ] 00:14:52.718 [2024-11-26 18:10:40.611730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.718 [2024-11-26 18:10:40.677908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.718 [2024-11-26 18:10:40.677962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.718 [2024-11-26 18:10:40.677966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.284 I/O targets: 00:14:53.284 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:53.284 00:14:53.284 00:14:53.284 CUnit - A unit testing framework for C - Version 2.1-3 00:14:53.284 http://cunit.sourceforge.net/ 00:14:53.284 00:14:53.284 00:14:53.284 Suite: bdevio tests on: Nvme1n1 00:14:53.284 Test: blockdev write read block ...passed 00:14:53.284 Test: blockdev write zeroes read block ...passed 00:14:53.284 Test: blockdev write zeroes read no split ...passed 00:14:53.284 Test: blockdev write zeroes read split ...passed 00:14:53.284 Test: blockdev write zeroes read split partial ...passed 00:14:53.284 Test: blockdev reset ...[2024-11-26 18:10:41.111577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:53.284 [2024-11-26 18:10:41.111684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1374cb0 (9): Bad file descriptor 00:14:53.284 [2024-11-26 18:10:41.164890] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:53.284 passed 00:14:53.284 Test: blockdev write read 8 blocks ...passed 00:14:53.284 Test: blockdev write read size > 128k ...passed 00:14:53.284 Test: blockdev write read invalid size ...passed 00:14:53.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:53.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:53.284 Test: blockdev write read max offset ...passed 00:14:53.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:53.543 Test: blockdev writev readv 8 blocks ...passed 00:14:53.543 Test: blockdev writev readv 30 x 1block ...passed 00:14:53.543 Test: blockdev writev readv block ...passed 00:14:53.543 Test: blockdev writev readv size > 128k ...passed 00:14:53.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:53.543 Test: blockdev comparev and writev ...[2024-11-26 18:10:41.419506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.543 [2024-11-26 18:10:41.419542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.419566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.543 [2024-11-26 18:10:41.419584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.419979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.543 [2024-11-26 18:10:41.420004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.420026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.543 [2024-11-26 18:10:41.420041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.420422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.543 [2024-11-26 18:10:41.420447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.420468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.543 [2024-11-26 18:10:41.420484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.420784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.543 [2024-11-26 18:10:41.420808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.420829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:53.543 [2024-11-26 18:10:41.420854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:53.543 passed 00:14:53.543 Test: blockdev nvme passthru rw ...passed 00:14:53.543 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:10:41.503561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.543 [2024-11-26 18:10:41.503588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.503728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.543 [2024-11-26 18:10:41.503751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.503886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.543 [2024-11-26 18:10:41.503908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:53.543 [2024-11-26 18:10:41.504039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:53.543 [2024-11-26 18:10:41.504062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:53.543 passed 00:14:53.543 Test: blockdev nvme admin passthru ...passed 00:14:53.801 Test: blockdev copy ...passed 00:14:53.801 00:14:53.801 Run Summary: Type Total Ran Passed Failed Inactive 00:14:53.802 suites 1 1 n/a 0 0 00:14:53.802 tests 23 23 23 0 0 00:14:53.802 asserts 152 152 152 0 n/a 00:14:53.802 00:14:53.802 Elapsed time = 1.139 seconds 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:53.802 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:53.802 rmmod nvme_tcp 00:14:53.802 rmmod nvme_fabrics 00:14:53.802 rmmod nvme_keyring 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
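The COMPARE FAILURE, ABORTED - FAILED FUSED and INVALID OPCODE completions logged above come from the fused compare-and-write and passthru cases, and the run summary (23 of 23 tests passed) confirms they are the outcomes those cases expect rather than harness failures. At this point nvmfcleanup has already synced and unloaded the NVMe host modules; the rest of nvmftestfini follows below (killing nvmf_tgt, restoring iptables, removing the namespace). The full teardown condenses to roughly the following, where the namespace deletion is an assumption about what the trace-suppressed _remove_spdk_ns helper does:

  sync
  modprobe -v -r nvme-tcp                                 # rmmod lines show nvme_tcp/nvme_fabrics/nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                         # nvmf_tgt started by nvmfappstart (542654 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the rule added during setup
  ip netns delete cvl_0_0_ns_spdk                         # assumption: _remove_spdk_ns deletes the SPDK namespace
  ip -4 addr flush cvl_0_1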
00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 542654 ']' 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 542654 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 542654 ']' 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 542654 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542654 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542654' 00:14:54.060 killing process with pid 542654 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 542654 00:14:54.060 18:10:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 542654 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.318 18:10:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.221 18:10:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:56.221 00:14:56.221 real 0m6.621s 00:14:56.221 user 0m10.771s 00:14:56.221 sys 0m2.251s 00:14:56.221 18:10:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.221 18:10:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:56.221 ************************************ 00:14:56.221 END TEST nvmf_bdevio 00:14:56.221 ************************************ 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:56.479 00:14:56.479 real 3m56.246s 00:14:56.479 user 10m15.853s 00:14:56.479 sys 1m7.499s 00:14:56.479 
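That closes the nvmf_bdevio suite (about 6.6 s of wall time) and nvmf_target_core as a whole; run_test then hands control to nvmf_target_extra.sh with the same --transport=tcp argument. A hypothetical manual invocation of that next stage outside Jenkins, assuming the SPDK build and the e810/TCP environment this job configures are already in place, would mirror the command run_test logs:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # or wherever the SPDK tree is checked out
  sudo ./test/nvmf/nvmf_target_extra.sh --transport=tcp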
18:10:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:56.479 ************************************ 00:14:56.479 END TEST nvmf_target_core 00:14:56.479 ************************************ 00:14:56.479 18:10:44 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:56.479 18:10:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.479 18:10:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.479 18:10:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.479 ************************************ 00:14:56.479 START TEST nvmf_target_extra 00:14:56.479 ************************************ 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:56.479 * Looking for test storage... 00:14:56.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.479 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:56.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.480 --rc genhtml_branch_coverage=1 00:14:56.480 --rc genhtml_function_coverage=1 00:14:56.480 --rc genhtml_legend=1 00:14:56.480 --rc geninfo_all_blocks=1 00:14:56.480 --rc geninfo_unexecuted_blocks=1 00:14:56.480 00:14:56.480 ' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:56.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.480 --rc genhtml_branch_coverage=1 00:14:56.480 --rc genhtml_function_coverage=1 00:14:56.480 --rc genhtml_legend=1 00:14:56.480 --rc geninfo_all_blocks=1 00:14:56.480 --rc geninfo_unexecuted_blocks=1 00:14:56.480 00:14:56.480 ' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:56.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.480 --rc genhtml_branch_coverage=1 00:14:56.480 --rc genhtml_function_coverage=1 00:14:56.480 --rc genhtml_legend=1 00:14:56.480 --rc geninfo_all_blocks=1 00:14:56.480 --rc geninfo_unexecuted_blocks=1 00:14:56.480 00:14:56.480 ' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:56.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.480 --rc genhtml_branch_coverage=1 00:14:56.480 --rc genhtml_function_coverage=1 00:14:56.480 --rc genhtml_legend=1 00:14:56.480 --rc geninfo_all_blocks=1 00:14:56.480 --rc geninfo_unexecuted_blocks=1 00:14:56.480 00:14:56.480 ' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.480 ************************************ 00:14:56.480 START TEST nvmf_example 00:14:56.480 ************************************ 00:14:56.480 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:56.739 * Looking for test storage... 
00:14:56.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:56.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.739 --rc genhtml_branch_coverage=1 00:14:56.739 --rc genhtml_function_coverage=1 00:14:56.739 --rc genhtml_legend=1 00:14:56.739 --rc geninfo_all_blocks=1 00:14:56.739 --rc geninfo_unexecuted_blocks=1 00:14:56.739 00:14:56.739 ' 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:56.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.739 --rc genhtml_branch_coverage=1 00:14:56.739 --rc genhtml_function_coverage=1 00:14:56.739 --rc genhtml_legend=1 00:14:56.739 --rc geninfo_all_blocks=1 00:14:56.739 --rc geninfo_unexecuted_blocks=1 00:14:56.739 00:14:56.739 ' 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:56.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.739 --rc genhtml_branch_coverage=1 00:14:56.739 --rc genhtml_function_coverage=1 00:14:56.739 --rc genhtml_legend=1 00:14:56.739 --rc geninfo_all_blocks=1 00:14:56.739 --rc geninfo_unexecuted_blocks=1 00:14:56.739 00:14:56.739 ' 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:56.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.739 --rc genhtml_branch_coverage=1 00:14:56.739 --rc genhtml_function_coverage=1 00:14:56.739 --rc genhtml_legend=1 00:14:56.739 --rc geninfo_all_blocks=1 00:14:56.739 --rc geninfo_unexecuted_blocks=1 00:14:56.739 00:14:56.739 ' 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:56.739 18:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.739 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:56.740 18:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:14:56.740 18:10:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:14:58.642 18:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:58.642 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:58.643 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:58.643 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:58.643 Found net devices under 0000:09:00.0: cvl_0_0 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:58.643 Found net devices under 0000:09:00.1: cvl_0_1 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:58.643 18:10:46 
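The scan above walks the cached PCI device list, matches the two Intel E810 ports (vendor 0x8086, device 0x159b) at 0000:09:00.0 and 0000:09:00.1, and resolves each one to its kernel net device, cvl_0_0 and cvl_0_1. A stand-alone equivalent of that lookup, outside the nvmf/common.sh helpers (illustrative only; lspci/sysfs usage is standard, but this is not the script's own code):

  # list E810 (8086:159b) ports and the netdev sitting behind each one
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
  done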
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:58.643 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:58.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:58.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:14:58.912 00:14:58.912 --- 10.0.0.2 ping statistics --- 00:14:58.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.912 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:58.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:58.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:14:58.912 00:14:58.912 --- 10.0.0.1 ping statistics --- 00:14:58.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:58.912 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:58.912 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=544925 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 544925 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 544925 ']' 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example 
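At this point nvmf_tcp_init has built the point-to-point test network: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1/24), an iptables rule admits TCP traffic to port 4420, and a ping in each direction confirms reachability before the example target is launched inside the namespace. Condensed to the essential commands (all taken from the trace above; a sketch, not the full helper):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator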
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:58.913 18:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:59.898 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:59.898 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:14:59.898 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:59.898 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:59.898 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.157 18:10:47 
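The rpc_cmd calls above configure the running example target over its RPC socket (/var/tmp/spdk.sock, per the wait message): create the TCP transport with an 8 KiB I/O unit size, create a 64 MiB / 512 B-block malloc bdev, expose it as a namespace of subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001), and add a TCP listener on 10.0.0.2:4420. Issued directly with scripts/rpc.py the same sequence would look roughly like this (a sketch; rpc_cmd is a thin wrapper around rpc.py):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                  # returns the bdev name, Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420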
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:00.157 18:10:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:12.358 Initializing NVMe Controllers 00:15:12.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:12.358 Initialization complete. Launching workers. 00:15:12.358 ======================================================== 00:15:12.358 Latency(us) 00:15:12.358 Device Information : IOPS MiB/s Average min max 00:15:12.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14224.25 55.56 4498.90 687.68 15890.41 00:15:12.358 ======================================================== 00:15:12.358 Total : 14224.25 55.56 4498.90 687.68 15890.41 00:15:12.358 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:12.358 rmmod nvme_tcp 00:15:12.358 rmmod nvme_fabrics 00:15:12.358 rmmod nvme_keyring 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 544925 ']' 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 544925 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 544925 ']' 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 544925 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 544925 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
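For context, the latency table above comes from a 10-second spdk_nvme_perf run of 4 KiB random I/O at queue depth 64 with a 30% read mix (-M sets the read percentage), pointed at the listener and subsystem created earlier via the -r transport ID string; the malloc-backed namespace sustains roughly 14.2 k IOPS (about 55.6 MiB/s) at ~4.5 ms average latency. The invocation with the knobs called out (path shortened; same values as the run above):

  # -q queue depth, -o I/O size in bytes, -w workload, -M read percentage, -t seconds
  spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'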
process_name=nvmf 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 544925' 00:15:12.358 killing process with pid 544925 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 544925 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 544925 00:15:12.358 nvmf threads initialize successfully 00:15:12.358 bdev subsystem init successfully 00:15:12.358 created a nvmf target service 00:15:12.358 create targets's poll groups done 00:15:12.358 all subsystems of target started 00:15:12.358 nvmf target is running 00:15:12.358 all subsystems of target stopped 00:15:12.358 destroy targets's poll groups done 00:15:12.358 destroyed the nvmf target service 00:15:12.358 bdev subsystem finish successfully 00:15:12.358 nvmf threads destroy successfully 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.358 18:10:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:12.927 00:15:12.927 real 0m16.214s 00:15:12.927 user 0m44.437s 00:15:12.927 sys 0m4.009s 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:12.927 ************************************ 00:15:12.927 END TEST nvmf_example 00:15:12.927 ************************************ 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:12.927 18:11:00 
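Teardown then mirrors the setup: the initiator-side NVMe modules are unloaded, the example target (pid 544925) is killed, the SPDK-tagged iptables rule is removed by re-applying a filtered iptables-save dump, the network namespace is torn down and the initiator address flushed before the next test (nvmf_filesystem) starts. A condensed sketch of what nvmftestfini/nvmf_tcp_fini do above (_remove_spdk_ns internals are not shown in this trace; ip netns delete is assumed to be the straightforward equivalent):

  modprobe -r nvme-tcp nvme-fabrics                       # unload initiator-side modules
  kill "$nvmfpid"                                         # 544925 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules tagged by ipts()
  ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1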
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.927 ************************************ 00:15:12.927 START TEST nvmf_filesystem 00:15:12.927 ************************************ 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:12.927 * Looking for test storage... 00:15:12.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:12.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.927 --rc genhtml_branch_coverage=1 00:15:12.927 --rc genhtml_function_coverage=1 00:15:12.927 --rc genhtml_legend=1 00:15:12.927 --rc geninfo_all_blocks=1 00:15:12.927 --rc geninfo_unexecuted_blocks=1 00:15:12.927 00:15:12.927 ' 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:12.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.927 --rc genhtml_branch_coverage=1 00:15:12.927 --rc genhtml_function_coverage=1 00:15:12.927 --rc genhtml_legend=1 00:15:12.927 --rc geninfo_all_blocks=1 00:15:12.927 --rc geninfo_unexecuted_blocks=1 00:15:12.927 00:15:12.927 ' 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:12.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.927 --rc genhtml_branch_coverage=1 00:15:12.927 --rc genhtml_function_coverage=1 00:15:12.927 --rc genhtml_legend=1 00:15:12.927 --rc geninfo_all_blocks=1 00:15:12.927 --rc geninfo_unexecuted_blocks=1 00:15:12.927 00:15:12.927 ' 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:12.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.927 --rc genhtml_branch_coverage=1 00:15:12.927 --rc genhtml_function_coverage=1 00:15:12.927 --rc genhtml_legend=1 00:15:12.927 --rc geninfo_all_blocks=1 00:15:12.927 --rc geninfo_unexecuted_blocks=1 00:15:12.927 00:15:12.927 ' 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:12.927 18:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:12.927 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:12.928 
18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:12.928 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:12.929 #define SPDK_CONFIG_H 00:15:12.929 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:12.929 #define SPDK_CONFIG_APPS 1 00:15:12.929 #define SPDK_CONFIG_ARCH native 00:15:12.929 #undef SPDK_CONFIG_ASAN 00:15:12.929 #undef SPDK_CONFIG_AVAHI 00:15:12.929 #undef SPDK_CONFIG_CET 00:15:12.929 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:12.929 #define SPDK_CONFIG_COVERAGE 1 00:15:12.929 #define SPDK_CONFIG_CROSS_PREFIX 00:15:12.929 #undef SPDK_CONFIG_CRYPTO 00:15:12.929 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:12.929 #undef SPDK_CONFIG_CUSTOMOCF 00:15:12.929 #undef SPDK_CONFIG_DAOS 00:15:12.929 #define SPDK_CONFIG_DAOS_DIR 00:15:12.929 #define SPDK_CONFIG_DEBUG 1 00:15:12.929 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:12.929 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:12.929 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:12.929 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:12.929 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:12.929 #undef SPDK_CONFIG_DPDK_UADK 00:15:12.929 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:12.929 #define SPDK_CONFIG_EXAMPLES 1 00:15:12.929 #undef SPDK_CONFIG_FC 00:15:12.929 #define SPDK_CONFIG_FC_PATH 00:15:12.929 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:12.929 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:12.929 #define SPDK_CONFIG_FSDEV 1 00:15:12.929 #undef SPDK_CONFIG_FUSE 00:15:12.929 #undef SPDK_CONFIG_FUZZER 00:15:12.929 #define SPDK_CONFIG_FUZZER_LIB 00:15:12.929 #undef SPDK_CONFIG_GOLANG 00:15:12.929 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:12.929 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:12.929 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:12.929 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:12.929 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:12.929 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:12.929 #undef SPDK_CONFIG_HAVE_LZ4 00:15:12.929 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:12.929 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:12.929 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:12.929 #define SPDK_CONFIG_IDXD 1 00:15:12.929 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:12.929 #undef SPDK_CONFIG_IPSEC_MB 00:15:12.929 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:12.929 #define SPDK_CONFIG_ISAL 1 00:15:12.929 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:12.929 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:12.929 #define SPDK_CONFIG_LIBDIR 00:15:12.929 #undef SPDK_CONFIG_LTO 00:15:12.929 #define SPDK_CONFIG_MAX_LCORES 128 00:15:12.929 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:12.929 #define SPDK_CONFIG_NVME_CUSE 1 00:15:12.929 #undef SPDK_CONFIG_OCF 00:15:12.929 #define SPDK_CONFIG_OCF_PATH 00:15:12.929 #define SPDK_CONFIG_OPENSSL_PATH 00:15:12.929 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:12.929 #define SPDK_CONFIG_PGO_DIR 00:15:12.929 #undef SPDK_CONFIG_PGO_USE 00:15:12.929 #define SPDK_CONFIG_PREFIX /usr/local 00:15:12.929 #undef SPDK_CONFIG_RAID5F 00:15:12.929 #undef SPDK_CONFIG_RBD 00:15:12.929 #define SPDK_CONFIG_RDMA 1 00:15:12.929 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:12.929 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:12.929 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:12.929 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:12.929 #define SPDK_CONFIG_SHARED 1 00:15:12.929 #undef SPDK_CONFIG_SMA 00:15:12.929 #define SPDK_CONFIG_TESTS 1 00:15:12.929 #undef SPDK_CONFIG_TSAN 
00:15:12.929 #define SPDK_CONFIG_UBLK 1 00:15:12.929 #define SPDK_CONFIG_UBSAN 1 00:15:12.929 #undef SPDK_CONFIG_UNIT_TESTS 00:15:12.929 #undef SPDK_CONFIG_URING 00:15:12.929 #define SPDK_CONFIG_URING_PATH 00:15:12.929 #undef SPDK_CONFIG_URING_ZNS 00:15:12.929 #undef SPDK_CONFIG_USDT 00:15:12.929 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:12.929 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:12.929 #define SPDK_CONFIG_VFIO_USER 1 00:15:12.929 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:12.929 #define SPDK_CONFIG_VHOST 1 00:15:12.929 #define SPDK_CONFIG_VIRTIO 1 00:15:12.929 #undef SPDK_CONFIG_VTUNE 00:15:12.929 #define SPDK_CONFIG_VTUNE_DIR 00:15:12.929 #define SPDK_CONFIG_WERROR 1 00:15:12.929 #define SPDK_CONFIG_WPDK_DIR 00:15:12.929 #undef SPDK_CONFIG_XNVME 00:15:12.929 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:12.929 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:13.193 18:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:13.193 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:13.193 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:13.193 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:13.193 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:13.193 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:13.193 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:13.193 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:13.194 18:11:00 
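Note: the alternating '-- # : 0' / '-- # export SPDK_TEST_...' pairs in the trace above are what bash's set -x prints for the usual "assign a default only if unset, then export" idiom. A minimal sketch of that idiom, using flag names and values that appear in this run (the exact autotest_common.sh source is not reproduced here, so treat this as an illustration):
    # flags not set by the job config fall back to 0, so the trace shows ": 0"
    : "${SPDK_TEST_NVME:=0}"
    export SPDK_TEST_NVME
    # flags the job config already exported (e.g. SPDK_TEST_NVME_CLI=1) keep their
    # value, which is why the corresponding trace line reads ": 1"
    : "${SPDK_TEST_NVME_CLI:=0}"
    export SPDK_TEST_NVME_CLI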
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:13.194 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:13.195 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
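Note: the sanitizer plumbing traced above boils down to a suppression file plus two environment variables; a condensed sketch using the same paths and option strings shown in the trace (how the script actually assembles the file may differ slightly):
    # silence the known libfuse3 leak for LeakSanitizer, keep UBSAN failures fatal
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134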
00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:13.196 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 546746 ]] 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 546746 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:15:13.197 
18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Y6mE8x 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Y6mE8x/tests/target /tmp/spdk.Y6mE8x 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:15:13.197 18:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50774519808 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11214000128 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375265280 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22441984 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:13.197 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=29919584256 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074675712 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:13.198 18:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:13.198 * Looking for test storage... 00:15:13.198 18:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=50774519808 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13428592640 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:15:13.198 18:11:01 
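Note: the numbers in the set_test_storage check above work out as follows. The test directory resolves to the root overlay mount, which reports 61988519936 bytes total, 11214000128 in use and 50774519808 available; the requested 2214592512 bytes fit, and new_size = 11214000128 + 2214592512 = 13428592640, about 22% of the filesystem, so the 95% cutoff is cleared and SPDK_TEST_STORAGE stays under the spdk test tree rather than the /tmp/spdk.Y6mE8x fallback. A minimal recreation of that arithmetic (values copied from the df output in the trace):
    used=11214000128 requested=2214592512 size=61988519936
    new_size=$((used + requested))               # 13428592640
    if (( new_size * 100 / size > 95 )); then    # 21 with integer division, well under 95
        echo 'would fall back to temporary storage'
    fi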
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:13.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.198 --rc genhtml_branch_coverage=1 00:15:13.198 --rc genhtml_function_coverage=1 00:15:13.198 --rc genhtml_legend=1 00:15:13.198 --rc geninfo_all_blocks=1 00:15:13.198 --rc geninfo_unexecuted_blocks=1 00:15:13.198 00:15:13.198 ' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:13.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.198 --rc genhtml_branch_coverage=1 00:15:13.198 --rc genhtml_function_coverage=1 00:15:13.198 --rc genhtml_legend=1 00:15:13.198 --rc geninfo_all_blocks=1 00:15:13.198 --rc geninfo_unexecuted_blocks=1 00:15:13.198 00:15:13.198 ' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:13.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.198 --rc genhtml_branch_coverage=1 00:15:13.198 --rc genhtml_function_coverage=1 00:15:13.198 --rc genhtml_legend=1 00:15:13.198 --rc geninfo_all_blocks=1 00:15:13.198 --rc geninfo_unexecuted_blocks=1 00:15:13.198 00:15:13.198 ' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:13.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:13.198 --rc genhtml_branch_coverage=1 00:15:13.198 --rc genhtml_function_coverage=1 00:15:13.198 --rc genhtml_legend=1 00:15:13.198 --rc geninfo_all_blocks=1 00:15:13.198 --rc geninfo_unexecuted_blocks=1 00:15:13.198 00:15:13.198 ' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:13.198 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:13.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:15:13.199 18:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:15.732 
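Note: the gather_supported_nvmf_pci_devs pass that follows fills the per-vendor arrays declared above (e810, x722, mlx) from a PCI bus cache keyed by vendor:device ID, then resolves each matching function to its kernel netdev through sysfs. The lookup reduces to something like the sketch below; the PCI address and interface name are the ones this run reports a few lines later:
    pci=0000:09:00.0                       # an ice-driven E810 port (8086:159b)
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 in this run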
18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:15.732 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:15.732 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:15.732 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:15.733 Found net devices under 0000:09:00.0: cvl_0_0 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:15.733 Found net devices under 
0000:09:00.1: cvl_0_1 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:15.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:15.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:15:15.733 00:15:15.733 --- 10.0.0.2 ping statistics --- 00:15:15.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.733 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:15:15.733 00:15:15.733 --- 10.0.0.1 ping statistics --- 00:15:15.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.733 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:15.733 ************************************ 00:15:15.733 START TEST nvmf_filesystem_no_in_capsule 00:15:15.733 ************************************ 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
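A minimal stand-alone sketch of the topology the common.sh trace above just built: one E810 port stays in the root namespace as the initiator, the second is moved into a private network namespace for the target, a firewall exception is opened for the NVMe/TCP port, and both directions are ping-checked. Interface names (cvl_0_0 / cvl_0_1) and the 10.0.0.x addresses are the ones from this run, not anything fixed by SPDK.

  # Sketch only -- mirrors the nvmf_tcp_init steps logged above.
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"               # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"           # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                              # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> root ns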
00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=548389 00:15:15.733 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:15.734 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 548389 00:15:15.734 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 548389 ']' 00:15:15.734 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.734 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.734 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.734 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.734 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.734 [2024-11-26 18:11:03.645610] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:15:15.734 [2024-11-26 18:11:03.645717] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.734 [2024-11-26 18:11:03.716213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:15.992 [2024-11-26 18:11:03.776707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.992 [2024-11-26 18:11:03.776754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.992 [2024-11-26 18:11:03.776777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.992 [2024-11-26 18:11:03.776788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.992 [2024-11-26 18:11:03.776798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
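The nvmfappstart step logged above launches the target application inside that namespace and blocks until its JSON-RPC socket answers. A hedged sketch of doing the same by hand with the flags from this run; the simple poll loop is a stand-in for the harness's waitforlisten helper, and spdk_get_version is just a convenient RPC to probe the socket with.

  # Sketch: start nvmf_tgt inside the target namespace, then poll /var/tmp/spdk.sock
  # until the app is ready to accept RPCs.
  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.1
  done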
00:15:15.992 [2024-11-26 18:11:03.778244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.992 [2024-11-26 18:11:03.778357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.992 [2024-11-26 18:11:03.778383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.992 [2024-11-26 18:11:03.778386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:15.992 [2024-11-26 18:11:03.925200] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.992 18:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:16.251 Malloc1 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.251 18:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:16.251 [2024-11-26 18:11:04.114062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.251 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:16.252 { 00:15:16.252 "name": "Malloc1", 00:15:16.252 "aliases": [ 00:15:16.252 "66b42bdf-69fe-4bfc-9309-f2eeaad59dbe" 00:15:16.252 ], 00:15:16.252 "product_name": "Malloc disk", 00:15:16.252 "block_size": 512, 00:15:16.252 "num_blocks": 1048576, 00:15:16.252 "uuid": "66b42bdf-69fe-4bfc-9309-f2eeaad59dbe", 00:15:16.252 "assigned_rate_limits": { 00:15:16.252 "rw_ios_per_sec": 0, 00:15:16.252 "rw_mbytes_per_sec": 0, 00:15:16.252 "r_mbytes_per_sec": 0, 00:15:16.252 "w_mbytes_per_sec": 0 00:15:16.252 }, 00:15:16.252 "claimed": true, 00:15:16.252 "claim_type": "exclusive_write", 00:15:16.252 "zoned": false, 00:15:16.252 "supported_io_types": { 00:15:16.252 "read": 
true, 00:15:16.252 "write": true, 00:15:16.252 "unmap": true, 00:15:16.252 "flush": true, 00:15:16.252 "reset": true, 00:15:16.252 "nvme_admin": false, 00:15:16.252 "nvme_io": false, 00:15:16.252 "nvme_io_md": false, 00:15:16.252 "write_zeroes": true, 00:15:16.252 "zcopy": true, 00:15:16.252 "get_zone_info": false, 00:15:16.252 "zone_management": false, 00:15:16.252 "zone_append": false, 00:15:16.252 "compare": false, 00:15:16.252 "compare_and_write": false, 00:15:16.252 "abort": true, 00:15:16.252 "seek_hole": false, 00:15:16.252 "seek_data": false, 00:15:16.252 "copy": true, 00:15:16.252 "nvme_iov_md": false 00:15:16.252 }, 00:15:16.252 "memory_domains": [ 00:15:16.252 { 00:15:16.252 "dma_device_id": "system", 00:15:16.252 "dma_device_type": 1 00:15:16.252 }, 00:15:16.252 { 00:15:16.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.252 "dma_device_type": 2 00:15:16.252 } 00:15:16.252 ], 00:15:16.252 "driver_specific": {} 00:15:16.252 } 00:15:16.252 ]' 00:15:16.252 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:16.252 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:16.252 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:16.252 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:16.252 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:16.252 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:16.252 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:16.252 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.185 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:17.185 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:17.185 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.185 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:17.185 18:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:19.084 18:11:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:19.084 18:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:19.342 18:11:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:20.715 ************************************ 00:15:20.715 START TEST filesystem_ext4 00:15:20.715 ************************************ 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
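Before the ext4 sub-test starts, the trace above provisions the target over RPC (TCP transport with in-capsule data disabled, a 512 MiB malloc bdev, subsystem, namespace, listener), attaches from the host, and carves a single partition. A condensed sketch using the NQN, serial, address and sizes logged above; rpc.py here is the plain-CLI equivalent of the trace's rpc_cmd wrapper, and the --hostnqn/--hostid flags from the run are omitted for brevity.

  # Target-side provisioning (values copied from the trace above).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0     # -c 0: no in-capsule data
  $RPC bdev_malloc_create 512 512 -b Malloc1            # 512 MiB backing bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: connect, wait for the namespace to appear, then make one GPT partition.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 0.5; done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  mkdir -p /mnt/device
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe

Each filesystem sub-test that follows (ext4, btrfs, xfs) then runs mkfs on the new partition, mounts it at /mnt/device, writes and removes a file, unmounts, and checks via kill -0 that the target process is still alive, as the trace shows.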
00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:15:20.715 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:20.715 mke2fs 1.47.0 (5-Feb-2023) 00:15:20.715 Discarding device blocks: 0/522240 done 00:15:20.715 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:20.715 Filesystem UUID: 5c4486c3-0a88-4e64-bea2-a6236f17827f 00:15:20.715 Superblock backups stored on blocks: 00:15:20.715 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:20.715 00:15:20.715 Allocating group tables: 0/64 done 00:15:20.715 Writing inode tables: 0/64 done 00:15:21.647 Creating journal (8192 blocks): done 00:15:21.647 Writing superblocks and filesystem accounting information: 0/64 done 00:15:21.647 00:15:21.647 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:15:21.647 18:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:28.201 
18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 548389 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:28.201 00:15:28.201 real 0m6.769s 00:15:28.201 user 0m0.019s 00:15:28.201 sys 0m0.058s 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:28.201 ************************************ 00:15:28.201 END TEST filesystem_ext4 00:15:28.201 ************************************ 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.201 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.201 ************************************ 00:15:28.201 START TEST filesystem_btrfs 00:15:28.201 ************************************ 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:15:28.202 18:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:28.202 btrfs-progs v6.8.1 00:15:28.202 See https://btrfs.readthedocs.io for more information. 00:15:28.202 00:15:28.202 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:15:28.202 NOTE: several default settings have changed in version 5.15, please make sure 00:15:28.202 this does not affect your deployments: 00:15:28.202 - DUP for metadata (-m dup) 00:15:28.202 - enabled no-holes (-O no-holes) 00:15:28.202 - enabled free-space-tree (-R free-space-tree) 00:15:28.202 00:15:28.202 Label: (null) 00:15:28.202 UUID: 6be210af-2e9d-493d-b1b8-31b073b0aa06 00:15:28.202 Node size: 16384 00:15:28.202 Sector size: 4096 (CPU page size: 4096) 00:15:28.202 Filesystem size: 510.00MiB 00:15:28.202 Block group profiles: 00:15:28.202 Data: single 8.00MiB 00:15:28.202 Metadata: DUP 32.00MiB 00:15:28.202 System: DUP 8.00MiB 00:15:28.202 SSD detected: yes 00:15:28.202 Zoned device: no 00:15:28.202 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:28.202 Checksum: crc32c 00:15:28.202 Number of devices: 1 00:15:28.202 Devices: 00:15:28.202 ID SIZE PATH 00:15:28.202 1 510.00MiB /dev/nvme0n1p1 00:15:28.202 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 548389 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:28.202 
18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:28.202 00:15:28.202 real 0m0.706s 00:15:28.202 user 0m0.026s 00:15:28.202 sys 0m0.095s 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:15:28.202 ************************************ 00:15:28.202 END TEST filesystem_btrfs 00:15:28.202 ************************************ 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:28.202 ************************************ 00:15:28.202 START TEST filesystem_xfs 00:15:28.202 ************************************ 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:15:28.202 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:28.202 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:28.202 = sectsz=512 attr=2, projid32bit=1 00:15:28.202 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:28.202 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:28.202 data 
= bsize=4096 blocks=130560, imaxpct=25 00:15:28.202 = sunit=0 swidth=0 blks 00:15:28.202 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:28.202 log =internal log bsize=4096 blocks=16384, version=2 00:15:28.202 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:28.202 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:29.136 Discarding blocks...Done. 00:15:29.136 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:15:29.136 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 548389 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:31.662 00:15:31.662 real 0m3.664s 00:15:31.662 user 0m0.017s 00:15:31.662 sys 0m0.068s 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:31.662 ************************************ 00:15:31.662 END TEST filesystem_xfs 00:15:31.662 ************************************ 00:15:31.662 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:31.920 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:31.920 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.178 18:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.178 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:15:32.178 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:32.178 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.178 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:32.178 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.178 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:15:32.178 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.178 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.178 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 548389 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 548389 ']' 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 548389 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 548389 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 548389' 00:15:32.178 killing process with pid 548389 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 548389 00:15:32.178 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 548389 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:32.745 00:15:32.745 real 0m16.875s 00:15:32.745 user 1m5.381s 00:15:32.745 sys 0m2.059s 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:32.745 ************************************ 00:15:32.745 END TEST nvmf_filesystem_no_in_capsule 00:15:32.745 ************************************ 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:32.745 ************************************ 00:15:32.745 START TEST nvmf_filesystem_in_capsule 00:15:32.745 ************************************ 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=550618 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 550618 00:15:32.745 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 550618 ']' 00:15:32.746 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.746 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.746 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
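Between the no-in-capsule run that just finished and the in-capsule run starting here, the harness tears everything down in reverse order: drop the test partition, disconnect the host, delete the subsystem, and stop the target. A hedged sketch of that teardown with the names from this run; the plain kill/wait stands in for the killprocess helper, which also verifies the process name before signalling.

  # Sketch of the teardown logged above (pid 548389 was this run's nvmf_tgt).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 0.5; done
  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null          # nvmfpid saved when nvmf_tgt was launched

The in-capsule variant below repeats the same flow; the main functional difference is that nvmf_create_transport is called with -c 4096, so small writes travel inside the command capsule.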
00:15:32.746 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.746 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:32.746 [2024-11-26 18:11:20.580914] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:15:32.746 [2024-11-26 18:11:20.580993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.746 [2024-11-26 18:11:20.659262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.746 [2024-11-26 18:11:20.723379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.746 [2024-11-26 18:11:20.723433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.746 [2024-11-26 18:11:20.723447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.746 [2024-11-26 18:11:20.723458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.746 [2024-11-26 18:11:20.723469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.746 [2024-11-26 18:11:20.725078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.746 [2024-11-26 18:11:20.725143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.746 [2024-11-26 18:11:20.725211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.746 [2024-11-26 18:11:20.725215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.004 [2024-11-26 18:11:20.887615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.004 18:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.004 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 Malloc1 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 [2024-11-26 18:11:21.094923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:33.264 18:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.264 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:33.264 { 00:15:33.264 "name": "Malloc1", 00:15:33.264 "aliases": [ 00:15:33.264 "bdb93fd1-0614-4429-a436-b59ab2b7ef78" 00:15:33.264 ], 00:15:33.264 "product_name": "Malloc disk", 00:15:33.264 "block_size": 512, 00:15:33.264 "num_blocks": 1048576, 00:15:33.264 "uuid": "bdb93fd1-0614-4429-a436-b59ab2b7ef78", 00:15:33.264 "assigned_rate_limits": { 00:15:33.264 "rw_ios_per_sec": 0, 00:15:33.264 "rw_mbytes_per_sec": 0, 00:15:33.265 "r_mbytes_per_sec": 0, 00:15:33.265 "w_mbytes_per_sec": 0 00:15:33.265 }, 00:15:33.265 "claimed": true, 00:15:33.265 "claim_type": "exclusive_write", 00:15:33.265 "zoned": false, 00:15:33.265 "supported_io_types": { 00:15:33.265 "read": true, 00:15:33.265 "write": true, 00:15:33.265 "unmap": true, 00:15:33.265 "flush": true, 00:15:33.265 "reset": true, 00:15:33.265 "nvme_admin": false, 00:15:33.265 "nvme_io": false, 00:15:33.265 "nvme_io_md": false, 00:15:33.265 "write_zeroes": true, 00:15:33.265 "zcopy": true, 00:15:33.265 "get_zone_info": false, 00:15:33.265 "zone_management": false, 00:15:33.265 "zone_append": false, 00:15:33.265 "compare": false, 00:15:33.265 "compare_and_write": false, 00:15:33.265 "abort": true, 00:15:33.265 "seek_hole": false, 00:15:33.265 "seek_data": false, 00:15:33.265 "copy": true, 00:15:33.265 "nvme_iov_md": false 00:15:33.265 }, 00:15:33.265 "memory_domains": [ 00:15:33.265 { 00:15:33.265 "dma_device_id": "system", 00:15:33.265 "dma_device_type": 1 00:15:33.265 }, 00:15:33.265 { 00:15:33.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.265 "dma_device_type": 2 00:15:33.265 } 00:15:33.265 ], 00:15:33.265 "driver_specific": {} 00:15:33.265 } 00:15:33.265 ]' 00:15:33.265 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:33.265 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:33.265 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:33.265 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:33.265 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:33.265 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:33.265 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:33.265 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:33.831 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:33.831 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:33.831 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:33.831 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:33.831 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:36.360 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:36.360 18:11:24 
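The target setup and host connect traced above reduce to roughly the following sequence (a minimal sketch, not the test script itself: rpc.py stands in for the test's rpc_cmd wrapper, it assumes the TCP transport was already created with nvmf_create_transport earlier in the run, and the addresses, NQN and serial are the ones used here):

scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 512 MiB malloc bdev with 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition for the filesystem subtests; partprobe follows below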
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:36.926 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:37.860 ************************************ 00:15:37.860 START TEST filesystem_in_capsule_ext4 00:15:37.860 ************************************ 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:15:37.860 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:37.860 mke2fs 1.47.0 (5-Feb-2023) 00:15:37.860 Discarding device blocks: 0/522240 done 00:15:38.117 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:38.118 Filesystem UUID: ff97d8d8-f48f-48b3-ba7a-e586d4e833c9 00:15:38.118 Superblock backups stored on blocks: 00:15:38.118 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:38.118 00:15:38.118 Allocating group tables: 0/64 done 00:15:38.118 Writing inode tables: 
0/64 done 00:15:40.671 Creating journal (8192 blocks): done 00:15:40.999 Writing superblocks and filesystem accounting information: 0/64 done 00:15:40.999 00:15:40.999 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:15:40.999 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:46.258 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:46.258 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 550618 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:46.259 00:15:46.259 real 0m8.386s 00:15:46.259 user 0m0.020s 00:15:46.259 sys 0m0.070s 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:15:46.259 ************************************ 00:15:46.259 END TEST filesystem_in_capsule_ext4 00:15:46.259 ************************************ 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:46.259 
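Each filesystem_in_capsule_* subtest follows the same pattern from target/filesystem.sh: build a filesystem on the namespace's partition, mount it, create and delete a file, unmount, and check that the target process is still alive. A rough sketch of the ext4 pass just completed (btrfs and xfs below differ only in the mkfs command; the umount retry loop is omitted):

mkfs.ext4 -F /dev/nvme0n1p1        # btrfs/xfs use -f instead of -F
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"                 # nvmf_tgt pid, 550618 in this run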
************************************ 00:15:46.259 START TEST filesystem_in_capsule_btrfs 00:15:46.259 ************************************ 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:15:46.259 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:46.517 btrfs-progs v6.8.1 00:15:46.517 See https://btrfs.readthedocs.io for more information. 00:15:46.517 00:15:46.517 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:46.517 NOTE: several default settings have changed in version 5.15, please make sure 00:15:46.517 this does not affect your deployments: 00:15:46.517 - DUP for metadata (-m dup) 00:15:46.517 - enabled no-holes (-O no-holes) 00:15:46.517 - enabled free-space-tree (-R free-space-tree) 00:15:46.517 00:15:46.517 Label: (null) 00:15:46.517 UUID: 58fc1b41-a605-4f8e-94f9-fe8f7dc64a0e 00:15:46.517 Node size: 16384 00:15:46.518 Sector size: 4096 (CPU page size: 4096) 00:15:46.518 Filesystem size: 510.00MiB 00:15:46.518 Block group profiles: 00:15:46.518 Data: single 8.00MiB 00:15:46.518 Metadata: DUP 32.00MiB 00:15:46.518 System: DUP 8.00MiB 00:15:46.518 SSD detected: yes 00:15:46.518 Zoned device: no 00:15:46.518 Features: extref, skinny-metadata, no-holes, free-space-tree 00:15:46.518 Checksum: crc32c 00:15:46.518 Number of devices: 1 00:15:46.518 Devices: 00:15:46.518 ID SIZE PATH 00:15:46.518 1 510.00MiB /dev/nvme0n1p1 00:15:46.518 00:15:46.518 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:15:46.518 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 550618 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:47.450 00:15:47.450 real 0m1.063s 00:15:47.450 user 0m0.025s 00:15:47.450 sys 0m0.089s 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:15:47.450 ************************************ 00:15:47.450 END TEST filesystem_in_capsule_btrfs 00:15:47.450 ************************************ 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:47.450 ************************************ 00:15:47.450 START TEST filesystem_in_capsule_xfs 00:15:47.450 ************************************ 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:15:47.450 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:47.451 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:47.451 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:15:47.451 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:15:47.451 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:15:47.451 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:15:47.451 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:15:47.451 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:15:47.451 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:47.451 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:47.451 = sectsz=512 attr=2, projid32bit=1 00:15:47.451 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:47.451 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:47.451 data = bsize=4096 blocks=130560, imaxpct=25 00:15:47.451 = sunit=0 swidth=0 blks 00:15:47.451 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:47.451 log =internal log bsize=4096 blocks=16384, version=2 00:15:47.451 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:47.451 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:48.383 Discarding blocks...Done. 
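make_filesystem, as traced for ext4, btrfs and xfs, only varies the force flag (mkfs.ext4 takes -F, the others take -f); paraphrased, not the verbatim autotest_common.sh logic:

if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
mkfs."$fstype" $force "$dev_name"     # e.g. mkfs.xfs -f /dev/nvme0n1p1 as above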
00:15:48.383 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:15:48.383 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 550618 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:50.279 00:15:50.279 real 0m2.636s 00:15:50.279 user 0m0.016s 00:15:50.279 sys 0m0.065s 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:15:50.279 ************************************ 00:15:50.279 END TEST filesystem_in_capsule_xfs 00:15:50.279 ************************************ 00:15:50.279 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:50.279 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:15:50.279 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 550618 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 550618 ']' 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 550618 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 550618 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 550618' 00:15:50.537 killing process with pid 550618 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 550618 00:15:50.537 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 550618 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:15:51.103 00:15:51.103 real 0m18.310s 00:15:51.103 user 1m10.949s 00:15:51.103 sys 0m2.179s 00:15:51.103 18:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:51.103 ************************************ 00:15:51.103 END TEST nvmf_filesystem_in_capsule 00:15:51.103 ************************************ 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.103 rmmod nvme_tcp 00:15:51.103 rmmod nvme_fabrics 00:15:51.103 rmmod nvme_keyring 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.103 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.011 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:53.011 00:15:53.011 real 0m40.234s 00:15:53.011 user 2m17.499s 00:15:53.011 sys 0m6.150s 00:15:53.011 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.011 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:53.011 
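The teardown traced above mirrors the setup: remove the test partition, disconnect the host, delete the subsystem over RPC, stop the target, and unload the NVMe/TCP host modules. Condensed (rpc.py again standing in for rpc_cmd):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"        # 550618 in this run
modprobe -r nvme-tcp nvme-fabrics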
************************************ 00:15:53.011 END TEST nvmf_filesystem 00:15:53.011 ************************************ 00:15:53.011 18:11:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:53.011 18:11:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.011 18:11:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.011 18:11:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.011 ************************************ 00:15:53.011 START TEST nvmf_target_discovery 00:15:53.011 ************************************ 00:15:53.011 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:53.270 * Looking for test storage... 00:15:53.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:53.270 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:53.270 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:15:53.270 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:53.270 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:53.270 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:53.270 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:53.270 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:53.270 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.270 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:53.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.271 --rc genhtml_branch_coverage=1 00:15:53.271 --rc genhtml_function_coverage=1 00:15:53.271 --rc genhtml_legend=1 00:15:53.271 --rc geninfo_all_blocks=1 00:15:53.271 --rc geninfo_unexecuted_blocks=1 00:15:53.271 00:15:53.271 ' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:53.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.271 --rc genhtml_branch_coverage=1 00:15:53.271 --rc genhtml_function_coverage=1 00:15:53.271 --rc genhtml_legend=1 00:15:53.271 --rc geninfo_all_blocks=1 00:15:53.271 --rc geninfo_unexecuted_blocks=1 00:15:53.271 00:15:53.271 ' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:53.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.271 --rc genhtml_branch_coverage=1 00:15:53.271 --rc genhtml_function_coverage=1 00:15:53.271 --rc genhtml_legend=1 00:15:53.271 --rc geninfo_all_blocks=1 00:15:53.271 --rc geninfo_unexecuted_blocks=1 00:15:53.271 00:15:53.271 ' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:53.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.271 --rc genhtml_branch_coverage=1 00:15:53.271 --rc genhtml_function_coverage=1 00:15:53.271 --rc genhtml_legend=1 00:15:53.271 --rc geninfo_all_blocks=1 00:15:53.271 --rc geninfo_unexecuted_blocks=1 00:15:53.271 00:15:53.271 ' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:53.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.271 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:53.272 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:53.272 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:53.272 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.272 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.272 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.272 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:53.272 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:53.272 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:15:53.272 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:15:55.803 18:11:43 
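The host identity used by the earlier nvme connect call comes from common.sh as traced a few entries back: the host NQN is generated with nvme gen-hostnqn and the host ID is its UUID portion. Roughly (the exact extraction in common.sh may differ; the suffix strip here is illustrative):

NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:29f67375-... in this run
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # 29f67375-a902-e411-ace9-001e67bc3c9a
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")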
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:55.803 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:55.803 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:55.803 Found net devices under 0000:09:00.0: cvl_0_0 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.803 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:55.804 Found net devices under 0000:09:00.1: cvl_0_1 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.804 18:11:43 
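nvmf_tcp_init isolates the target-side port in its own network namespace so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) talk over the physical e810 link. Condensed from the trace, including the link-up and iptables steps that follow just below:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in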
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:55.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:15:55.804 00:15:55.804 --- 10.0.0.2 ping statistics --- 00:15:55.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.804 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:15:55.804 00:15:55.804 --- 10.0.0.1 ping statistics --- 00:15:55.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.804 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=554917 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 554917 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 554917 ']' 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.804 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.804 [2024-11-26 18:11:43.570327] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:15:55.804 [2024-11-26 18:11:43.570441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.804 [2024-11-26 18:11:43.643244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.804 [2024-11-26 18:11:43.699445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.804 [2024-11-26 18:11:43.699500] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.804 [2024-11-26 18:11:43.699523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.804 [2024-11-26 18:11:43.699533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.804 [2024-11-26 18:11:43.699542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
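nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits in waitforlisten for the RPC socket to answer. A rough standalone equivalent using the paths and flags from this job; the polling loop is a simplification of the real helper, not its implementation:

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the target namespace and poll its RPC socket,
# approximating the nvmfappstart/waitforlisten sequence traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

for _ in {1..100}; do
    # The RPC socket is a UNIX-domain socket, so rpc.py works from the default namespace.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is answering RPC on /var/tmp/spdk.sock"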
00:15:55.804 [2024-11-26 18:11:43.701182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.804 [2024-11-26 18:11:43.701257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.804 [2024-11-26 18:11:43.701324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.804 [2024-11-26 18:11:43.701328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.063 [2024-11-26 18:11:43.850813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.063 Null1 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.063 18:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.063 [2024-11-26 18:11:43.902502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.063 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 Null2 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:15:56.064 Null3 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 Null4 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.064 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:15:56.323 00:15:56.323 Discovery Log Number of Records 6, Generation counter 6 00:15:56.323 =====Discovery Log Entry 0====== 00:15:56.323 trtype: tcp 00:15:56.323 adrfam: ipv4 00:15:56.323 subtype: current discovery subsystem 00:15:56.323 treq: not required 00:15:56.323 portid: 0 00:15:56.323 trsvcid: 4420 00:15:56.323 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:56.323 traddr: 10.0.0.2 00:15:56.323 eflags: explicit discovery connections, duplicate discovery information 00:15:56.323 sectype: none 00:15:56.323 =====Discovery Log Entry 1====== 00:15:56.323 trtype: tcp 00:15:56.323 adrfam: ipv4 00:15:56.323 subtype: nvme subsystem 00:15:56.323 treq: not required 00:15:56.323 portid: 0 00:15:56.323 trsvcid: 4420 00:15:56.323 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:56.323 traddr: 10.0.0.2 00:15:56.323 eflags: none 00:15:56.323 sectype: none 00:15:56.323 =====Discovery Log Entry 2====== 00:15:56.323 trtype: tcp 00:15:56.323 adrfam: ipv4 00:15:56.323 subtype: nvme subsystem 00:15:56.323 treq: not required 00:15:56.323 portid: 0 00:15:56.323 trsvcid: 4420 00:15:56.323 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:56.323 traddr: 10.0.0.2 00:15:56.323 eflags: none 00:15:56.323 sectype: none 00:15:56.323 =====Discovery Log Entry 3====== 00:15:56.323 trtype: tcp 00:15:56.323 adrfam: ipv4 00:15:56.323 subtype: nvme subsystem 00:15:56.323 treq: not required 00:15:56.323 portid: 0 00:15:56.323 trsvcid: 4420 00:15:56.323 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:56.323 traddr: 10.0.0.2 00:15:56.323 eflags: none 00:15:56.323 sectype: none 00:15:56.323 =====Discovery Log Entry 4====== 00:15:56.323 trtype: tcp 00:15:56.323 adrfam: ipv4 00:15:56.323 subtype: nvme subsystem 
00:15:56.323 treq: not required 00:15:56.323 portid: 0 00:15:56.323 trsvcid: 4420 00:15:56.323 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:56.323 traddr: 10.0.0.2 00:15:56.323 eflags: none 00:15:56.323 sectype: none 00:15:56.323 =====Discovery Log Entry 5====== 00:15:56.323 trtype: tcp 00:15:56.323 adrfam: ipv4 00:15:56.323 subtype: discovery subsystem referral 00:15:56.323 treq: not required 00:15:56.323 portid: 0 00:15:56.323 trsvcid: 4430 00:15:56.323 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:56.323 traddr: 10.0.0.2 00:15:56.323 eflags: none 00:15:56.323 sectype: none 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:56.323 Perform nvmf subsystem discovery via RPC 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.323 [ 00:15:56.323 { 00:15:56.323 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:56.323 "subtype": "Discovery", 00:15:56.323 "listen_addresses": [ 00:15:56.323 { 00:15:56.323 "trtype": "TCP", 00:15:56.323 "adrfam": "IPv4", 00:15:56.323 "traddr": "10.0.0.2", 00:15:56.323 "trsvcid": "4420" 00:15:56.323 } 00:15:56.323 ], 00:15:56.323 "allow_any_host": true, 00:15:56.323 "hosts": [] 00:15:56.323 }, 00:15:56.323 { 00:15:56.323 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.323 "subtype": "NVMe", 00:15:56.323 "listen_addresses": [ 00:15:56.323 { 00:15:56.323 "trtype": "TCP", 00:15:56.323 "adrfam": "IPv4", 00:15:56.323 "traddr": "10.0.0.2", 00:15:56.323 "trsvcid": "4420" 00:15:56.323 } 00:15:56.323 ], 00:15:56.323 "allow_any_host": true, 00:15:56.323 "hosts": [], 00:15:56.323 "serial_number": "SPDK00000000000001", 00:15:56.323 "model_number": "SPDK bdev Controller", 00:15:56.323 "max_namespaces": 32, 00:15:56.323 "min_cntlid": 1, 00:15:56.323 "max_cntlid": 65519, 00:15:56.323 "namespaces": [ 00:15:56.323 { 00:15:56.323 "nsid": 1, 00:15:56.323 "bdev_name": "Null1", 00:15:56.323 "name": "Null1", 00:15:56.323 "nguid": "ABCA220209FC4B12B2F89B94661492D4", 00:15:56.323 "uuid": "abca2202-09fc-4b12-b2f8-9b94661492d4" 00:15:56.323 } 00:15:56.323 ] 00:15:56.323 }, 00:15:56.323 { 00:15:56.323 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:56.323 "subtype": "NVMe", 00:15:56.323 "listen_addresses": [ 00:15:56.323 { 00:15:56.323 "trtype": "TCP", 00:15:56.323 "adrfam": "IPv4", 00:15:56.323 "traddr": "10.0.0.2", 00:15:56.323 "trsvcid": "4420" 00:15:56.323 } 00:15:56.323 ], 00:15:56.323 "allow_any_host": true, 00:15:56.323 "hosts": [], 00:15:56.323 "serial_number": "SPDK00000000000002", 00:15:56.323 "model_number": "SPDK bdev Controller", 00:15:56.323 "max_namespaces": 32, 00:15:56.323 "min_cntlid": 1, 00:15:56.323 "max_cntlid": 65519, 00:15:56.323 "namespaces": [ 00:15:56.323 { 00:15:56.323 "nsid": 1, 00:15:56.323 "bdev_name": "Null2", 00:15:56.323 "name": "Null2", 00:15:56.323 "nguid": "590E95963E1348CCAB9A2CAA31F6CF98", 00:15:56.323 "uuid": "590e9596-3e13-48cc-ab9a-2caa31f6cf98" 00:15:56.323 } 00:15:56.323 ] 00:15:56.323 }, 00:15:56.323 { 00:15:56.323 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:56.323 "subtype": "NVMe", 00:15:56.323 "listen_addresses": [ 00:15:56.323 { 00:15:56.323 "trtype": "TCP", 00:15:56.323 "adrfam": "IPv4", 00:15:56.323 "traddr": "10.0.0.2", 
00:15:56.323 "trsvcid": "4420" 00:15:56.323 } 00:15:56.323 ], 00:15:56.323 "allow_any_host": true, 00:15:56.323 "hosts": [], 00:15:56.323 "serial_number": "SPDK00000000000003", 00:15:56.323 "model_number": "SPDK bdev Controller", 00:15:56.323 "max_namespaces": 32, 00:15:56.323 "min_cntlid": 1, 00:15:56.323 "max_cntlid": 65519, 00:15:56.323 "namespaces": [ 00:15:56.323 { 00:15:56.323 "nsid": 1, 00:15:56.323 "bdev_name": "Null3", 00:15:56.323 "name": "Null3", 00:15:56.323 "nguid": "77AFCBBC6DC44DAC828736596D50582F", 00:15:56.323 "uuid": "77afcbbc-6dc4-4dac-8287-36596d50582f" 00:15:56.323 } 00:15:56.323 ] 00:15:56.323 }, 00:15:56.323 { 00:15:56.323 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:56.323 "subtype": "NVMe", 00:15:56.323 "listen_addresses": [ 00:15:56.323 { 00:15:56.323 "trtype": "TCP", 00:15:56.323 "adrfam": "IPv4", 00:15:56.323 "traddr": "10.0.0.2", 00:15:56.323 "trsvcid": "4420" 00:15:56.323 } 00:15:56.323 ], 00:15:56.323 "allow_any_host": true, 00:15:56.323 "hosts": [], 00:15:56.323 "serial_number": "SPDK00000000000004", 00:15:56.323 "model_number": "SPDK bdev Controller", 00:15:56.323 "max_namespaces": 32, 00:15:56.323 "min_cntlid": 1, 00:15:56.323 "max_cntlid": 65519, 00:15:56.323 "namespaces": [ 00:15:56.323 { 00:15:56.323 "nsid": 1, 00:15:56.323 "bdev_name": "Null4", 00:15:56.323 "name": "Null4", 00:15:56.323 "nguid": "F2CA2262DAF34F3C81F4A68306F16547", 00:15:56.323 "uuid": "f2ca2262-daf3-4f3c-81f4-a68306f16547" 00:15:56.323 } 00:15:56.323 ] 00:15:56.323 } 00:15:56.323 ] 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.323 18:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.323 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:56.324 18:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:56.324 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:56.324 rmmod nvme_tcp 00:15:56.582 rmmod nvme_fabrics 00:15:56.582 rmmod nvme_keyring 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 554917 ']' 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 554917 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 554917 ']' 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 554917 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 554917 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 554917' 00:15:56.582 killing process with pid 554917 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 554917 00:15:56.582 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 554917 00:15:56.840 18:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:56.840 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:56.840 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:56.840 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:15:56.841 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:56.841 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:56.841 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:15:56.841 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:56.841 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:56.841 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.841 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.841 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.747 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:58.747 00:15:58.747 real 0m5.691s 00:15:58.747 user 0m4.702s 00:15:58.747 sys 0m2.013s 00:15:58.747 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.747 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.747 ************************************ 00:15:58.747 END TEST nvmf_target_discovery 00:15:58.747 ************************************ 00:15:58.747 18:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:58.747 18:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:58.747 18:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.747 18:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:58.747 ************************************ 00:15:58.747 START TEST nvmf_referrals 00:15:58.747 ************************************ 00:15:58.747 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:59.007 * Looking for test storage... 
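The nvmftestfini trace at the end of the discovery test above unwinds the setup: the target pid is killed and reaped, the nvme kernel modules are unloaded, the SPDK_NVMF-tagged iptables rules are stripped, and the target namespace is removed. Condensed into a standalone sketch with the values from this run (the real helpers retry and redirect more carefully):

#!/usr/bin/env bash
# Sketch of the nvmftestfini cleanup path traced above (end of nvmf_target_discovery).
nvmfpid=554917                                   # pid reported by this run
kill "$nvmfpid" 2>/dev/null || true
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done   # wait for the target to exit

modprobe -v -r nvme-tcp                          # also drops now-unused fabrics/keyring, as logged
modprobe -v -r nvme-fabrics

# Remove only the rules tagged SPDK_NVMF, mirroring the iptr helper.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Flush the initiator-side address and drop the target namespace.
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk 2>/dev/null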
00:15:59.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:15:59.007 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:59.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.008 --rc genhtml_branch_coverage=1 00:15:59.008 --rc genhtml_function_coverage=1 00:15:59.008 --rc genhtml_legend=1 00:15:59.008 --rc geninfo_all_blocks=1 00:15:59.008 --rc geninfo_unexecuted_blocks=1 00:15:59.008 00:15:59.008 ' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:59.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.008 --rc genhtml_branch_coverage=1 00:15:59.008 --rc genhtml_function_coverage=1 00:15:59.008 --rc genhtml_legend=1 00:15:59.008 --rc geninfo_all_blocks=1 00:15:59.008 --rc geninfo_unexecuted_blocks=1 00:15:59.008 00:15:59.008 ' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:59.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.008 --rc genhtml_branch_coverage=1 00:15:59.008 --rc genhtml_function_coverage=1 00:15:59.008 --rc genhtml_legend=1 00:15:59.008 --rc geninfo_all_blocks=1 00:15:59.008 --rc geninfo_unexecuted_blocks=1 00:15:59.008 00:15:59.008 ' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:59.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.008 --rc genhtml_branch_coverage=1 00:15:59.008 --rc genhtml_function_coverage=1 00:15:59.008 --rc genhtml_legend=1 00:15:59.008 --rc geninfo_all_blocks=1 00:15:59.008 --rc geninfo_unexecuted_blocks=1 00:15:59.008 00:15:59.008 ' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:59.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.008 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:59.009 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:59.009 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:15:59.009 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.542 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.542 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:01.542 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:01.542 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:01.542 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:01.542 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:01.542 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:16:01.543 18:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:01.543 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:01.543 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:01.543 
18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:01.543 Found net devices under 0000:09:00.0: cvl_0_0 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:01.543 Found net devices under 0000:09:00.1: cvl_0_1 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:01.543 18:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.543 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:01.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:01.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:16:01.544 00:16:01.544 --- 10.0.0.2 ping statistics --- 00:16:01.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.544 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:16:01.544 00:16:01.544 --- 10.0.0.1 ping statistics --- 00:16:01.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.544 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=557014 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 557014 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 557014 ']' 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
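The trace above splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, both directions are ping-tested, and nvmf_tgt is then launched inside the namespace. A minimal stand-alone sketch of that same setup follows; the interface names, addresses and nvmf_tgt flags are copied from this run, while the socket wait loop is only an illustration and not the suite's actual waitforlisten helper.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk      # workspace path from this run

  # Target side lives in its own namespace; initiator side stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Sanity-check both directions, then start the target inside the namespace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done   # illustrative wait, not waitforlisten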
00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.544 [2024-11-26 18:11:49.266840] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:16:01.544 [2024-11-26 18:11:49.266916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.544 [2024-11-26 18:11:49.340089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.544 [2024-11-26 18:11:49.400529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.544 [2024-11-26 18:11:49.400598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.544 [2024-11-26 18:11:49.400612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.544 [2024-11-26 18:11:49.400623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.544 [2024-11-26 18:11:49.400633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.544 [2024-11-26 18:11:49.402298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.544 [2024-11-26 18:11:49.402365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.544 [2024-11-26 18:11:49.402431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.544 [2024-11-26 18:11:49.402435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:01.544 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.801 [2024-11-26 18:11:49.564871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
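From here the referrals test drives the target entirely through rpc_cmd, the suite's wrapper that talks to the RPC socket at /var/tmp/spdk.sock (ordinarily via scripts/rpc.py). A hedged sketch of the same sequence issued directly with rpc.py, with the transport options, addresses and ports copied from the trace:

  RPC="$SPDK/scripts/rpc.py"    # assumes $SPDK from the sketch above

  # TCP transport plus the discovery service listening on the target address.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

  # Register three referrals, then confirm the target reports exactly three.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $RPC nvmf_discovery_get_referrals | jq length   # expected: 3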
00:16:01.801 [2024-11-26 18:11:49.585500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:01.801 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:02.059 18:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:02.059 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:02.363 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:02.363 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:02.363 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:02.363 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.363 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:02.364 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:02.621 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:02.621 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:02.621 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:02.621 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:02.621 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:02.621 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:02.878 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.879 18:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:02.879 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:03.136 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:03.136 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:03.136 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:03.136 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:16:03.136 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:03.136 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:03.393 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:03.393 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:03.393 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.393 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.393 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.393 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:03.393 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.393 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:03.393 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:03.394 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.394 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:03.394 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:16:03.394 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:03.394 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:03.394 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:03.394 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:03.394 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
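The pass/fail checks in the referral test above are not taken from the RPC view alone: each step also asks the initiator for the discovery log in JSON and filters the records with jq, comparing the result against the RPC output. A condensed sketch of that check, reusing the host NQN/ID and the jq filters visible in this run:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a

  # Referral addresses as the initiator sees them: every record that is not the
  # discovery subsystem currently being queried.
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Subsystem NQN behind a referral registered with -n nqn.2016-06.io.spdk:cnode1.
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'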
00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:03.651 rmmod nvme_tcp 00:16:03.651 rmmod nvme_fabrics 00:16:03.651 rmmod nvme_keyring 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 557014 ']' 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 557014 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 557014 ']' 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 557014 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 557014 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 557014' 00:16:03.651 killing process with pid 557014 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 557014 00:16:03.651 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 557014 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.909 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.814 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:05.814 00:16:05.814 real 0m7.062s 00:16:05.814 user 0m10.884s 00:16:05.814 sys 0m2.382s 00:16:05.814 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.814 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:05.814 ************************************ 00:16:05.814 END TEST nvmf_referrals 00:16:05.814 ************************************ 00:16:06.073 18:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:06.073 18:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:06.073 18:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.073 18:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.073 ************************************ 00:16:06.073 START TEST nvmf_connect_disconnect 00:16:06.073 ************************************ 00:16:06.073 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:16:06.073 * Looking for test storage... 00:16:06.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.073 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:06.073 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:16:06.073 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:06.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.073 --rc genhtml_branch_coverage=1 00:16:06.073 --rc genhtml_function_coverage=1 00:16:06.073 --rc genhtml_legend=1 00:16:06.073 --rc geninfo_all_blocks=1 00:16:06.073 --rc geninfo_unexecuted_blocks=1 00:16:06.073 00:16:06.073 ' 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:06.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.073 --rc genhtml_branch_coverage=1 00:16:06.073 --rc genhtml_function_coverage=1 00:16:06.073 --rc genhtml_legend=1 00:16:06.073 --rc geninfo_all_blocks=1 00:16:06.073 --rc geninfo_unexecuted_blocks=1 00:16:06.073 00:16:06.073 ' 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:06.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.073 --rc genhtml_branch_coverage=1 00:16:06.073 --rc genhtml_function_coverage=1 00:16:06.073 --rc genhtml_legend=1 00:16:06.073 --rc geninfo_all_blocks=1 00:16:06.073 --rc geninfo_unexecuted_blocks=1 00:16:06.073 00:16:06.073 ' 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:06.073 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.073 --rc genhtml_branch_coverage=1 00:16:06.073 --rc genhtml_function_coverage=1 00:16:06.073 --rc genhtml_legend=1 00:16:06.073 --rc geninfo_all_blocks=1 00:16:06.073 --rc geninfo_unexecuted_blocks=1 00:16:06.073 00:16:06.073 ' 00:16:06.073 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.074 18:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:16:06.074 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:08.609 
18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:08.609 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:08.609 
18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:08.609 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:08.609 Found net devices under 0000:09:00.0: cvl_0_0 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.609 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:08.610 Found net devices under 0000:09:00.1: cvl_0_1 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:08.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:16:08.610 00:16:08.610 --- 10.0.0.2 ping statistics --- 00:16:08.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.610 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:08.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:16:08.610 00:16:08.610 --- 10.0.0.1 ping statistics --- 00:16:08.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.610 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=559314 00:16:08.610 18:11:56 
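nvmf_tcp_init above splits the two E810 ports between the two ends of the test: cvl_0_1 stays in the default namespace as the initiator interface (10.0.0.1/24) while cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2/24); the interface names themselves were discovered under /sys/bus/pci/devices/<bdf>/net/ earlier in the trace. Reachability is then verified in both directions with a single ping. Condensed from the traced commands above (names and addresses exactly as in this run; the address flushes and the iptables ACCEPT rule are omitted here, the firewall rule is covered further below):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator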
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 559314 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 559314 ']' 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.610 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:08.610 [2024-11-26 18:11:56.327384] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:16:08.610 [2024-11-26 18:11:56.327496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.611 [2024-11-26 18:11:56.397704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.611 [2024-11-26 18:11:56.458243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.611 [2024-11-26 18:11:56.458332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.611 [2024-11-26 18:11:56.458350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.611 [2024-11-26 18:11:56.458362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.611 [2024-11-26 18:11:56.458373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
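nvmfappstart launches nvmf_tgt inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the application answers on /var/tmp/spdk.sock before any rpc_cmd is issued. The real polling loop lives in autotest_common.sh and is not part of this excerpt; a minimal sketch of the idea, assuming the default socket path printed above:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                 # bounded wait for the RPC socket
        if [ -S /var/tmp/spdk.sock ]; then break; fi
        if ! kill -0 "$nvmfpid"; then exit 1; fi    # target died during startup
        sleep 0.1
    done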
00:16:08.611 [2024-11-26 18:11:56.460045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.611 [2024-11-26 18:11:56.460119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.611 [2024-11-26 18:11:56.460193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.611 [2024-11-26 18:11:56.460196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.611 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.611 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:16:08.611 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:08.611 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:08.611 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:08.611 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.611 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:16:08.611 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.611 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:08.611 [2024-11-26 18:11:56.614179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:08.869 18:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:08.869 [2024-11-26 18:11:56.685936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:16:08.869 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:16:12.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.321 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:22.321 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:22.321 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.321 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:22.321 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.321 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:22.321 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.321 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.321 rmmod nvme_tcp 00:16:22.321 rmmod nvme_fabrics 00:16:22.579 rmmod nvme_keyring 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 559314 ']' 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 559314 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 559314 ']' 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 559314 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
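With the listener up, connect_disconnect.sh has provisioned the target entirely over RPC (tcp transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev, listener on 10.0.0.2:4420) and then runs num_iterations=5 connect/disconnect cycles; only the nvme-cli disconnect output surfaces in the log above. A condensed sketch of the flow, assuming stock nvme-cli option names (the loop body itself is not shown in this excerpt):

    rpc=./scripts/rpc.py      # the test wraps this in rpc_cmd
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                                      # -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for ((i = 0; i < 5; i++)); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # "... disconnected 1 controller(s)" above
    done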
00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 559314 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 559314' 00:16:22.579 killing process with pid 559314 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 559314 00:16:22.579 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 559314 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:22.839 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.744 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:24.744 00:16:24.744 real 0m18.849s 00:16:24.744 user 0m56.372s 00:16:24.744 sys 0m3.441s 00:16:24.744 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.744 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:24.744 ************************************ 00:16:24.744 END TEST nvmf_connect_disconnect 00:16:24.744 ************************************ 00:16:24.744 18:12:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:24.744 18:12:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:24.744 18:12:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.744 18:12:12 nvmf_tcp.nvmf_target_extra 
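nvmftestfini above tears the setup back down: the initiator kernel modules are unloaded (rmmod nvme_tcp / nvme_fabrics / nvme_keyring), the nvmf_tgt process is killed by pid, and iptr removes only the firewall rules this test installed by filtering on the SPDK_NVMF comment that ipts attached when the ACCEPT rule went in. The tag-and-strip pattern, condensed from the setup and teardown commands traced above:

    # setup (ipts): tag the rule so it can be identified later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # teardown (iptr): reload the ruleset minus every SPDK_NVMF-tagged rule
    iptables-save | grep -v SPDK_NVMF | iptables-restore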
-- common/autotest_common.sh@10 -- # set +x 00:16:25.002 ************************************ 00:16:25.002 START TEST nvmf_multitarget 00:16:25.002 ************************************ 00:16:25.002 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:25.002 * Looking for test storage... 00:16:25.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.002 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:25.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.003 --rc genhtml_branch_coverage=1 00:16:25.003 --rc genhtml_function_coverage=1 00:16:25.003 --rc genhtml_legend=1 00:16:25.003 --rc geninfo_all_blocks=1 00:16:25.003 --rc geninfo_unexecuted_blocks=1 00:16:25.003 00:16:25.003 ' 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:25.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.003 --rc genhtml_branch_coverage=1 00:16:25.003 --rc genhtml_function_coverage=1 00:16:25.003 --rc genhtml_legend=1 00:16:25.003 --rc geninfo_all_blocks=1 00:16:25.003 --rc geninfo_unexecuted_blocks=1 00:16:25.003 00:16:25.003 ' 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:25.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.003 --rc genhtml_branch_coverage=1 00:16:25.003 --rc genhtml_function_coverage=1 00:16:25.003 --rc genhtml_legend=1 00:16:25.003 --rc geninfo_all_blocks=1 00:16:25.003 --rc geninfo_unexecuted_blocks=1 00:16:25.003 00:16:25.003 ' 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:25.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.003 --rc genhtml_branch_coverage=1 00:16:25.003 --rc genhtml_function_coverage=1 00:16:25.003 --rc genhtml_legend=1 00:16:25.003 --rc geninfo_all_blocks=1 00:16:25.003 --rc geninfo_unexecuted_blocks=1 00:16:25.003 00:16:25.003 ' 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.003 18:12:12 
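The cmp_versions trace above is scripts/common.sh checking whether the installed lcov is older than 2.x, which decides which --rc coverage options get exported into LCOV_OPTS/LCOV. It splits both version strings on ".", "-" and ":" and compares the components numerically, left to right. A stripped-down sketch of that comparison; the helper name ver_lt is hypothetical, the real script uses lt/cmp_versions as traced above:

    ver_lt() {                         # ver_lt 1.15 2  -> true (1.15 < 2)
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
            if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
        done
        return 1                       # equal versions are not "less than"
    }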
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.003 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:25.004 18:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:25.004 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:27.536 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:27.536 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:27.536 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:27.536 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:27.537 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:27.537 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:27.537 Found net devices under 0000:09:00.0: cvl_0_0 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:27.537 Found net devices under 0000:09:00.1: cvl_0_1 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:27.537 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:27.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:16:27.538 00:16:27.538 --- 10.0.0.2 ping statistics --- 00:16:27.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.538 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:27.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:27.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:16:27.538 00:16:27.538 --- 10.0.0.1 ping statistics --- 00:16:27.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.538 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=563700 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 563700 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 563700 ']' 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.538 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:27.538 [2024-11-26 18:12:15.330994] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:16:27.538 [2024-11-26 18:12:15.331084] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.538 [2024-11-26 18:12:15.408876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.538 [2024-11-26 18:12:15.469613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.538 [2024-11-26 18:12:15.469674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.538 [2024-11-26 18:12:15.469702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.538 [2024-11-26 18:12:15.469713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.538 [2024-11-26 18:12:15.469723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.538 [2024-11-26 18:12:15.471446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.538 [2024-11-26 18:12:15.471472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.538 [2024-11-26 18:12:15.471533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.538 [2024-11-26 18:12:15.471537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:27.796 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:28.054 "nvmf_tgt_1" 00:16:28.054 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:28.054 "nvmf_tgt_2" 00:16:28.054 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:28.054 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:28.312 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:28.312 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:28.312 true 00:16:28.312 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:28.570 true 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:28.570 rmmod nvme_tcp 00:16:28.570 rmmod nvme_fabrics 00:16:28.570 rmmod nvme_keyring 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 563700 ']' 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 563700 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 563700 ']' 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 563700 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.570 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 563700 00:16:28.827 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:28.827 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:28.828 18:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 563700' 00:16:28.828 killing process with pid 563700 00:16:28.828 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 563700 00:16:28.828 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 563700 00:16:28.828 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:28.828 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:28.828 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:28.828 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:28.828 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:28.828 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:28.828 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:29.085 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.086 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:29.086 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.086 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.086 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:30.998 00:16:30.998 real 0m6.113s 00:16:30.998 user 0m7.148s 00:16:30.998 sys 0m2.154s 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:30.998 ************************************ 00:16:30.998 END TEST nvmf_multitarget 00:16:30.998 ************************************ 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.998 ************************************ 00:16:30.998 START TEST nvmf_rpc 00:16:30.998 ************************************ 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:30.998 * Looking for test storage... 
00:16:30.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:16:30.998 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:31.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.258 --rc genhtml_branch_coverage=1 00:16:31.258 --rc genhtml_function_coverage=1 00:16:31.258 --rc genhtml_legend=1 00:16:31.258 --rc geninfo_all_blocks=1 00:16:31.258 --rc geninfo_unexecuted_blocks=1 00:16:31.258 00:16:31.258 ' 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:31.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.258 --rc genhtml_branch_coverage=1 00:16:31.258 --rc genhtml_function_coverage=1 00:16:31.258 --rc genhtml_legend=1 00:16:31.258 --rc geninfo_all_blocks=1 00:16:31.258 --rc geninfo_unexecuted_blocks=1 00:16:31.258 00:16:31.258 ' 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:31.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.258 --rc genhtml_branch_coverage=1 00:16:31.258 --rc genhtml_function_coverage=1 00:16:31.258 --rc genhtml_legend=1 00:16:31.258 --rc geninfo_all_blocks=1 00:16:31.258 --rc geninfo_unexecuted_blocks=1 00:16:31.258 00:16:31.258 ' 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:31.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.258 --rc genhtml_branch_coverage=1 00:16:31.258 --rc genhtml_function_coverage=1 00:16:31.258 --rc genhtml_legend=1 00:16:31.258 --rc geninfo_all_blocks=1 00:16:31.258 --rc geninfo_unexecuted_blocks=1 00:16:31.258 00:16:31.258 ' 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
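The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov is older than 2: both version strings are split on '.', '-' and ':' and compared component by component. A self-contained sketch of that comparison, assuming purely numeric components (the real helper routes each field through its decimal() normalizer, which this sketch replaces with a simple missing-field default of 0):

    # Return success (0) if version $1 is strictly less than version $2.
    version_lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        local i a b
        for (( i = 0; i < n; i++ )); do
            a=${v1[i]:-0}; b=${v2[i]:-0}       # missing fields count as 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                               # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov is older than 2"
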
00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.258 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:31.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:31.259 18:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:31.259 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:33.794 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:33.794 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:33.794 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:33.795 Found net devices under 0000:09:00.0: cvl_0_0 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:33.795 Found net devices under 0000:09:00.1: cvl_0_1 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:33.795 18:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:33.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:16:33.795 00:16:33.795 --- 10.0.0.2 ping statistics --- 00:16:33.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.795 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:33.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:16:33.795 00:16:33.795 --- 10.0.0.1 ping statistics --- 00:16:33.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.795 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=565814 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 565814 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 565814 ']' 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.795 [2024-11-26 18:12:21.474832] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
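The nvmf_tcp_init steps traced just before this target start wire the two e810 ports into a loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays on the host as the initiator side (10.0.0.1), an iptables ACCEPT rule opens TCP port 4420, and a ping in each direction confirms reachability. A bare-bones sketch of that wiring, with interface and namespace names taken from the trace and everything assumed to run as root:

    # Target side lives in a network namespace, initiator side stays on the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target (namespace)

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic through to the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
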
00:16:33.795 [2024-11-26 18:12:21.474928] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.795 [2024-11-26 18:12:21.545711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.795 [2024-11-26 18:12:21.604478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.795 [2024-11-26 18:12:21.604534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.795 [2024-11-26 18:12:21.604547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.795 [2024-11-26 18:12:21.604558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.795 [2024-11-26 18:12:21.604568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.795 [2024-11-26 18:12:21.606283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.795 [2024-11-26 18:12:21.606372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.795 [2024-11-26 18:12:21.606422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.795 [2024-11-26 18:12:21.606425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.795 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:33.795 "tick_rate": 2700000000, 00:16:33.795 "poll_groups": [ 00:16:33.795 { 00:16:33.795 "name": "nvmf_tgt_poll_group_000", 00:16:33.795 "admin_qpairs": 0, 00:16:33.795 "io_qpairs": 0, 00:16:33.795 "current_admin_qpairs": 0, 00:16:33.795 "current_io_qpairs": 0, 00:16:33.795 "pending_bdev_io": 0, 00:16:33.795 "completed_nvme_io": 0, 00:16:33.795 "transports": [] 00:16:33.796 }, 00:16:33.796 { 00:16:33.796 "name": "nvmf_tgt_poll_group_001", 00:16:33.796 "admin_qpairs": 0, 00:16:33.796 "io_qpairs": 0, 00:16:33.796 "current_admin_qpairs": 0, 00:16:33.796 "current_io_qpairs": 0, 00:16:33.796 "pending_bdev_io": 0, 00:16:33.796 "completed_nvme_io": 0, 00:16:33.796 "transports": [] 00:16:33.796 }, 00:16:33.796 { 00:16:33.796 "name": "nvmf_tgt_poll_group_002", 00:16:33.796 "admin_qpairs": 0, 00:16:33.796 "io_qpairs": 0, 00:16:33.796 
"current_admin_qpairs": 0, 00:16:33.796 "current_io_qpairs": 0, 00:16:33.796 "pending_bdev_io": 0, 00:16:33.796 "completed_nvme_io": 0, 00:16:33.796 "transports": [] 00:16:33.796 }, 00:16:33.796 { 00:16:33.796 "name": "nvmf_tgt_poll_group_003", 00:16:33.796 "admin_qpairs": 0, 00:16:33.796 "io_qpairs": 0, 00:16:33.796 "current_admin_qpairs": 0, 00:16:33.796 "current_io_qpairs": 0, 00:16:33.796 "pending_bdev_io": 0, 00:16:33.796 "completed_nvme_io": 0, 00:16:33.796 "transports": [] 00:16:33.796 } 00:16:33.796 ] 00:16:33.796 }' 00:16:33.796 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:33.796 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:33.796 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:33.796 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.055 [2024-11-26 18:12:21.851196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.055 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:34.055 "tick_rate": 2700000000, 00:16:34.055 "poll_groups": [ 00:16:34.055 { 00:16:34.055 "name": "nvmf_tgt_poll_group_000", 00:16:34.055 "admin_qpairs": 0, 00:16:34.055 "io_qpairs": 0, 00:16:34.055 "current_admin_qpairs": 0, 00:16:34.055 "current_io_qpairs": 0, 00:16:34.055 "pending_bdev_io": 0, 00:16:34.055 "completed_nvme_io": 0, 00:16:34.055 "transports": [ 00:16:34.055 { 00:16:34.055 "trtype": "TCP" 00:16:34.055 } 00:16:34.055 ] 00:16:34.055 }, 00:16:34.055 { 00:16:34.055 "name": "nvmf_tgt_poll_group_001", 00:16:34.055 "admin_qpairs": 0, 00:16:34.056 "io_qpairs": 0, 00:16:34.056 "current_admin_qpairs": 0, 00:16:34.056 "current_io_qpairs": 0, 00:16:34.056 "pending_bdev_io": 0, 00:16:34.056 "completed_nvme_io": 0, 00:16:34.056 "transports": [ 00:16:34.056 { 00:16:34.056 "trtype": "TCP" 00:16:34.056 } 00:16:34.056 ] 00:16:34.056 }, 00:16:34.056 { 00:16:34.056 "name": "nvmf_tgt_poll_group_002", 00:16:34.056 "admin_qpairs": 0, 00:16:34.056 "io_qpairs": 0, 00:16:34.056 "current_admin_qpairs": 0, 00:16:34.056 "current_io_qpairs": 0, 00:16:34.056 "pending_bdev_io": 0, 00:16:34.056 "completed_nvme_io": 0, 00:16:34.056 "transports": [ 00:16:34.056 { 00:16:34.056 "trtype": "TCP" 
00:16:34.056 } 00:16:34.056 ] 00:16:34.056 }, 00:16:34.056 { 00:16:34.056 "name": "nvmf_tgt_poll_group_003", 00:16:34.056 "admin_qpairs": 0, 00:16:34.056 "io_qpairs": 0, 00:16:34.056 "current_admin_qpairs": 0, 00:16:34.056 "current_io_qpairs": 0, 00:16:34.056 "pending_bdev_io": 0, 00:16:34.056 "completed_nvme_io": 0, 00:16:34.056 "transports": [ 00:16:34.056 { 00:16:34.056 "trtype": "TCP" 00:16:34.056 } 00:16:34.056 ] 00:16:34.056 } 00:16:34.056 ] 00:16:34.056 }' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 Malloc1 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.056 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.056 [2024-11-26 18:12:22.026331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:34.056 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:16:34.056 [2024-11-26 18:12:22.048991] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:16:34.314 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:34.314 could not add new controller: failed to write to nvme-fabrics device 00:16:34.314 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:34.314 18:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:34.314 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:34.314 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:34.314 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:34.314 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.314 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.314 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.314 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.880 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.880 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:34.880 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.880 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:34.880 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:36.777 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:36.777 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:36.777 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.777 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:36.777 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.777 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:36.777 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:37.035 [2024-11-26 18:12:24.921241] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:16:37.035 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:37.035 could not add new controller: failed to write to nvme-fabrics device 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:37.035 
18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.035 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:37.599 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:37.599 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:37.599 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:37.599 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:37.599 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:40.125 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:40.125 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.126 
18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.126 [2024-11-26 18:12:27.717145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.126 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.691 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.691 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:40.691 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.691 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:40.691 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.591 [2024-11-26 18:12:30.543065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.591 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.525 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.525 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:43.525 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.525 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:43.525 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.423 [2024-11-26 18:12:33.375259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.423 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:45.424 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.424 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:45.424 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.424 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:46.357 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.357 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:46.357 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.357 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:46.357 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:48.314 
18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:48.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.314 [2024-11-26 18:12:36.227727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.314 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.247 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:49.247 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:49.247 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.247 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:49.247 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:51.144 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:51.144 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:51.144 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.144 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:51.144 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.144 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:51.144 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:51.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.144 [2024-11-26 18:12:39.090506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:51.144 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.145 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.145 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.145 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:51.145 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.145 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.145 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.145 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.078 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.078 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:52.078 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.078 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:52.078 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.976 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:53.977 
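Each of the five iterations traced above (target/rpc.sh lines 81-94) builds the subsystem from scratch, attaches the Malloc1 bdev as namespace 5, connects over TCP, waits for the SPDKISFASTANDAWESOME serial to show up in lsblk, then tears everything back down. A sketch of one iteration, again assuming scripts/rpc.py, an existing Malloc1 bdev, and a reachable listener at 10.0.0.2:4420:

    # Hypothetical single pass of the create/connect/teardown cycle from the trace.
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    SERIAL=SPDKISFASTANDAWESOME

    scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s "$SERIAL"
    scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5      # request NSID 5
    scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"

    nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420

    # Poll, as waitforserial does, until a block device with the serial appears.
    until lsblk -l -o NAME,SERIAL | grep -qw "$SERIAL"; do sleep 1; done

    nvme disconnect -n "$SUBNQN"
    scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
    scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"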
18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 [2024-11-26 18:12:41.923971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 [2024-11-26 18:12:41.972049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.977 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.235 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.235 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.235 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.235 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.235 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.235 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 
18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 [2024-11-26 18:12:42.020206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 [2024-11-26 18:12:42.068515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 [2024-11-26 18:12:42.116559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.236 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:54.236 "tick_rate": 2700000000, 00:16:54.236 "poll_groups": [ 00:16:54.236 { 00:16:54.236 "name": "nvmf_tgt_poll_group_000", 00:16:54.236 "admin_qpairs": 2, 00:16:54.236 "io_qpairs": 84, 00:16:54.236 "current_admin_qpairs": 0, 00:16:54.236 "current_io_qpairs": 0, 00:16:54.236 "pending_bdev_io": 0, 00:16:54.236 "completed_nvme_io": 204, 00:16:54.236 "transports": [ 00:16:54.236 { 00:16:54.236 "trtype": "TCP" 00:16:54.236 } 00:16:54.236 ] 00:16:54.236 }, 00:16:54.236 { 00:16:54.236 "name": "nvmf_tgt_poll_group_001", 00:16:54.236 "admin_qpairs": 2, 00:16:54.236 "io_qpairs": 84, 00:16:54.236 "current_admin_qpairs": 0, 00:16:54.236 "current_io_qpairs": 0, 00:16:54.236 "pending_bdev_io": 0, 00:16:54.236 "completed_nvme_io": 177, 00:16:54.236 "transports": [ 00:16:54.236 { 00:16:54.237 "trtype": "TCP" 00:16:54.237 } 00:16:54.237 ] 00:16:54.237 }, 00:16:54.237 { 00:16:54.237 "name": "nvmf_tgt_poll_group_002", 00:16:54.237 "admin_qpairs": 1, 00:16:54.237 "io_qpairs": 84, 00:16:54.237 "current_admin_qpairs": 0, 00:16:54.237 "current_io_qpairs": 0, 00:16:54.237 "pending_bdev_io": 0, 00:16:54.237 "completed_nvme_io": 137, 00:16:54.237 "transports": [ 00:16:54.237 { 00:16:54.237 "trtype": "TCP" 00:16:54.237 } 00:16:54.237 ] 00:16:54.237 }, 00:16:54.237 { 00:16:54.237 "name": "nvmf_tgt_poll_group_003", 00:16:54.237 "admin_qpairs": 2, 00:16:54.237 "io_qpairs": 84, 00:16:54.237 "current_admin_qpairs": 0, 00:16:54.237 "current_io_qpairs": 0, 00:16:54.237 "pending_bdev_io": 0, 00:16:54.237 "completed_nvme_io": 168, 00:16:54.237 "transports": [ 00:16:54.237 { 00:16:54.237 "trtype": "TCP" 00:16:54.237 } 00:16:54.237 ] 00:16:54.237 } 00:16:54.237 ] 00:16:54.237 }' 00:16:54.237 18:12:42 
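The nvmf_get_stats output above reports per-poll-group queue pair and I/O counters; the jsum calls traced below total one field across all four poll groups, giving 7 admin qpairs (2+2+1+2) and 336 I/O qpairs (4 x 84). In rpc.sh the helper filters the captured $stats JSON; a standalone equivalent that queries a live target, assuming scripts/rpc.py plus jq and awk on the path:

    # Sum one per-poll-group counter from nvmf_get_stats, mirroring the jsum helper.
    jsum() {
        local filter=$1
        scripts/rpc.py nvmf_get_stats | jq "$filter" | awk '{s += $1} END {print s}'
    }

    jsum '.poll_groups[].admin_qpairs'   # 7 for the stats captured above
    jsum '.poll_groups[].io_qpairs'      # 336 for the stats captured above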
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:54.237 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:54.237 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:54.237 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:54.237 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:54.237 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:54.237 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:54.237 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:54.237 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:54.495 rmmod nvme_tcp 00:16:54.495 rmmod nvme_fabrics 00:16:54.495 rmmod nvme_keyring 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 565814 ']' 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 565814 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 565814 ']' 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 565814 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565814 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565814' 
00:16:54.495 killing process with pid 565814 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 565814 00:16:54.495 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 565814 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:54.753 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.658 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:56.658 00:16:56.658 real 0m25.708s 00:16:56.658 user 1m23.284s 00:16:56.658 sys 0m4.293s 00:16:56.658 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.658 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.658 ************************************ 00:16:56.658 END TEST nvmf_rpc 00:16:56.658 ************************************ 00:16:56.658 18:12:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.918 ************************************ 00:16:56.918 START TEST nvmf_invalid 00:16:56.918 ************************************ 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:56.918 * Looking for test storage... 
00:16:56.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:56.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.918 --rc genhtml_branch_coverage=1 00:16:56.918 --rc genhtml_function_coverage=1 00:16:56.918 --rc genhtml_legend=1 00:16:56.918 --rc geninfo_all_blocks=1 00:16:56.918 --rc geninfo_unexecuted_blocks=1 00:16:56.918 00:16:56.918 ' 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:56.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.918 --rc genhtml_branch_coverage=1 00:16:56.918 --rc genhtml_function_coverage=1 00:16:56.918 --rc genhtml_legend=1 00:16:56.918 --rc geninfo_all_blocks=1 00:16:56.918 --rc geninfo_unexecuted_blocks=1 00:16:56.918 00:16:56.918 ' 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:56.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.918 --rc genhtml_branch_coverage=1 00:16:56.918 --rc genhtml_function_coverage=1 00:16:56.918 --rc genhtml_legend=1 00:16:56.918 --rc geninfo_all_blocks=1 00:16:56.918 --rc geninfo_unexecuted_blocks=1 00:16:56.918 00:16:56.918 ' 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:56.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.918 --rc genhtml_branch_coverage=1 00:16:56.918 --rc genhtml_function_coverage=1 00:16:56.918 --rc genhtml_legend=1 00:16:56.918 --rc geninfo_all_blocks=1 00:16:56.918 --rc geninfo_unexecuted_blocks=1 00:16:56.918 00:16:56.918 ' 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:56.918 18:12:44 
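The scripts/common.sh trace above is the lcov version gate hit while sourcing the test: lt 1.15 2 hands '1.15', '<', and '2' to cmp_versions, which splits both versions into numeric fields and compares them left to right, returning 0 because 1 < 2, after which the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage options are kept. A condensed sketch of that comparison, assuming purely numeric version fields (the real helper also normalizes each field through its decimal function):

    # Hypothetical condensed form of the version comparison seen in the trace.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x > y )) && return 1   # strictly greater: not less-than
            (( x < y )) && return 0   # strictly smaller: less-than
        done
        return 1                      # equal: not less-than
    }

    version_lt 1.15 2 && echo "lcov older than 2: keep the legacy --rc coverage options"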
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.918 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.919 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:59.453 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:59.453 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:59.454 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:59.454 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:59.454 Found net devices under 0000:09:00.0: cvl_0_0 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:59.454 Found net devices under 0000:09:00.1: cvl_0_1 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.454 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:59.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:16:59.455 00:16:59.455 --- 10.0.0.2 ping statistics --- 00:16:59.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.455 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:59.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:16:59.455 00:16:59.455 --- 10.0.0.1 ping statistics --- 00:16:59.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.455 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=570426 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 570426 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 570426 ']' 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.455 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:59.455 [2024-11-26 18:12:47.287933] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
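For orientation, the trace below exercises SPDK's nvmf_create_subsystem RPC with deliberately invalid parameters and checks the JSON-RPC error text. A condensed sketch of those calls, assembled only from the commands visible later in this log (the -t, -s and -d flags and the cnode numbers are taken from the trace itself, not from the test source):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Nonexistent target name; expected error text: "Unable to find target foobar"
  $RPC nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8137 || true
  # Serial number containing a non-printable 0x1f byte; expected error text: "Invalid SN ..."
  $RPC nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3971 || true
  # Model number containing a non-printable 0x1f byte; expected error text: "Invalid MN ..."
  $RPC nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18986 || true

In the actual run each call targets the nvmf_tgt started in the cvl_0_0_ns_spdk namespace above, and target/invalid.sh captures the output and pattern-matches the error message instead of simply ignoring the non-zero exit status.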
00:16:59.455 [2024-11-26 18:12:47.288033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.455 [2024-11-26 18:12:47.360013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:59.455 [2024-11-26 18:12:47.414807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.455 [2024-11-26 18:12:47.414861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.455 [2024-11-26 18:12:47.414888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.455 [2024-11-26 18:12:47.414899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.455 [2024-11-26 18:12:47.414908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.455 [2024-11-26 18:12:47.416531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.455 [2024-11-26 18:12:47.416607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.455 [2024-11-26 18:12:47.416727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:59.455 [2024-11-26 18:12:47.416730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.713 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.713 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:16:59.713 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:59.713 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.713 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:59.713 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.713 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:59.713 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8137 00:16:59.971 [2024-11-26 18:12:47.824485] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:59.971 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:59.971 { 00:16:59.971 "nqn": "nqn.2016-06.io.spdk:cnode8137", 00:16:59.971 "tgt_name": "foobar", 00:16:59.971 "method": "nvmf_create_subsystem", 00:16:59.971 "req_id": 1 00:16:59.971 } 00:16:59.971 Got JSON-RPC error response 00:16:59.971 response: 00:16:59.971 { 00:16:59.971 "code": -32603, 00:16:59.971 "message": "Unable to find target foobar" 00:16:59.971 }' 00:16:59.971 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:59.971 { 00:16:59.971 "nqn": "nqn.2016-06.io.spdk:cnode8137", 00:16:59.971 "tgt_name": "foobar", 00:16:59.971 "method": "nvmf_create_subsystem", 00:16:59.971 "req_id": 1 00:16:59.971 } 00:16:59.971 Got JSON-RPC error response 00:16:59.971 
response: 00:16:59.971 { 00:16:59.971 "code": -32603, 00:16:59.971 "message": "Unable to find target foobar" 00:16:59.971 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:59.971 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:59.971 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3971 00:17:00.229 [2024-11-26 18:12:48.089397] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3971: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:00.229 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:00.229 { 00:17:00.229 "nqn": "nqn.2016-06.io.spdk:cnode3971", 00:17:00.229 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:00.229 "method": "nvmf_create_subsystem", 00:17:00.229 "req_id": 1 00:17:00.229 } 00:17:00.229 Got JSON-RPC error response 00:17:00.229 response: 00:17:00.229 { 00:17:00.229 "code": -32602, 00:17:00.229 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:00.229 }' 00:17:00.229 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:00.229 { 00:17:00.229 "nqn": "nqn.2016-06.io.spdk:cnode3971", 00:17:00.229 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:00.229 "method": "nvmf_create_subsystem", 00:17:00.229 "req_id": 1 00:17:00.229 } 00:17:00.229 Got JSON-RPC error response 00:17:00.229 response: 00:17:00.229 { 00:17:00.229 "code": -32602, 00:17:00.229 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:00.229 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:00.229 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:00.229 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18986 00:17:00.488 [2024-11-26 18:12:48.358270] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18986: invalid model number 'SPDK_Controller' 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:00.488 { 00:17:00.488 "nqn": "nqn.2016-06.io.spdk:cnode18986", 00:17:00.488 "model_number": "SPDK_Controller\u001f", 00:17:00.488 "method": "nvmf_create_subsystem", 00:17:00.488 "req_id": 1 00:17:00.488 } 00:17:00.488 Got JSON-RPC error response 00:17:00.488 response: 00:17:00.488 { 00:17:00.488 "code": -32602, 00:17:00.488 "message": "Invalid MN SPDK_Controller\u001f" 00:17:00.488 }' 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:00.488 { 00:17:00.488 "nqn": "nqn.2016-06.io.spdk:cnode18986", 00:17:00.488 "model_number": "SPDK_Controller\u001f", 00:17:00.488 "method": "nvmf_create_subsystem", 00:17:00.488 "req_id": 1 00:17:00.488 } 00:17:00.488 Got JSON-RPC error response 00:17:00.488 response: 00:17:00.488 { 00:17:00.488 "code": -32602, 00:17:00.488 "message": "Invalid MN SPDK_Controller\u001f" 00:17:00.488 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:00.488 18:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:00.488 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:00.489 
18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 
00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'PGoMsHjN]aHdP-v2z6wO6' 00:17:00.489 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'PGoMsHjN]aHdP-v2z6wO6' nqn.2016-06.io.spdk:cnode16828 00:17:00.747 [2024-11-26 18:12:48.711491] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16828: invalid serial number 'PGoMsHjN]aHdP-v2z6wO6' 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:00.747 { 00:17:00.747 "nqn": "nqn.2016-06.io.spdk:cnode16828", 00:17:00.747 "serial_number": "PGoMsHjN]aHdP-v2z6wO6", 00:17:00.747 "method": "nvmf_create_subsystem", 00:17:00.747 "req_id": 1 00:17:00.747 } 00:17:00.747 Got JSON-RPC error response 00:17:00.747 response: 00:17:00.747 { 00:17:00.747 "code": -32602, 00:17:00.747 "message": "Invalid SN PGoMsHjN]aHdP-v2z6wO6" 00:17:00.747 }' 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:00.747 { 00:17:00.747 "nqn": "nqn.2016-06.io.spdk:cnode16828", 00:17:00.747 "serial_number": "PGoMsHjN]aHdP-v2z6wO6", 00:17:00.747 "method": "nvmf_create_subsystem", 00:17:00.747 "req_id": 1 00:17:00.747 } 00:17:00.747 Got JSON-RPC error response 00:17:00.747 response: 00:17:00.747 { 00:17:00.747 "code": -32602, 00:17:00.747 "message": "Invalid SN PGoMsHjN]aHdP-v2z6wO6" 00:17:00.747 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' 
'75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:00.747 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:01.007 
18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:01.007 
18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:01.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 
18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 
18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 
00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 
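The run of printf %x / echo -e / string+= entries traced above is invalid.sh assembling a random model number one character at a time before handing it to nvmf_create_subsystem. A minimal sketch of that technique in bash (the helper name, length argument, and character range here are illustrative assumptions, not the verbatim script):

    gen_random_string() {
        # Build a string of printable ASCII characters, mirroring the
        # printf %x -> echo -e -> string+= pattern seen in the trace above.
        local length=$1 ll string=''
        for (( ll = 0; ll < length; ll++ )); do
            local code=$(( RANDOM % 94 + 33 ))   # printable range 0x21-0x7e (assumed)
            local hex
            printf -v hex '%x' "$code"
            string+=$(echo -e "\\x${hex}")
        done
        echo "$string"
    }

A 41-character result such as the one echoed a few entries below is one byte longer than the 40-byte NVMe model-number field, which is why the subsequent nvmf_create_subsystem call is rejected as an invalid model number.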
00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:17:01.008 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'T>\=p7(9flRR/_>A,@G)w%M~Ha.F?{F&nxDEJtE*4' 00:17:01.009 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'T>\=p7(9flRR/_>A,@G)w%M~Ha.F?{F&nxDEJtE*4' nqn.2016-06.io.spdk:cnode5256 00:17:01.266 [2024-11-26 18:12:49.136863] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5256: invalid model number 'T>\=p7(9flRR/_>A,@G)w%M~Ha.F?{F&nxDEJtE*4' 00:17:01.266 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:01.266 { 00:17:01.266 "nqn": "nqn.2016-06.io.spdk:cnode5256", 00:17:01.266 "model_number": "T>\\=p7(9flRR/_>A,@G)w%M~Ha.F?{F&nxDEJtE*4", 00:17:01.266 "method": "nvmf_create_subsystem", 00:17:01.266 "req_id": 1 00:17:01.266 } 00:17:01.266 Got JSON-RPC error response 00:17:01.266 response: 00:17:01.266 { 00:17:01.267 "code": -32602, 00:17:01.267 "message": "Invalid MN T>\\=p7(9flRR/_>A,@G)w%M~Ha.F?{F&nxDEJtE*4" 00:17:01.267 }' 00:17:01.267 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:01.267 { 00:17:01.267 "nqn": "nqn.2016-06.io.spdk:cnode5256", 00:17:01.267 "model_number": "T>\\=p7(9flRR/_>A,@G)w%M~Ha.F?{F&nxDEJtE*4", 00:17:01.267 "method": "nvmf_create_subsystem", 00:17:01.267 "req_id": 1 00:17:01.267 } 00:17:01.267 Got JSON-RPC error response 00:17:01.267 response: 00:17:01.267 { 00:17:01.267 "code": -32602, 00:17:01.267 "message": "Invalid MN T>\\=p7(9flRR/_>A,@G)w%M~Ha.F?{F&nxDEJtE*4" 00:17:01.267 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:01.267 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:01.523 [2024-11-26 18:12:49.401832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.523 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:01.780 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:01.780 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:01.780 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:01.780 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 
-- # IP= 00:17:01.780 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:02.038 [2024-11-26 18:12:49.987755] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:02.038 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:02.038 { 00:17:02.038 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:02.038 "listen_address": { 00:17:02.038 "trtype": "tcp", 00:17:02.038 "traddr": "", 00:17:02.038 "trsvcid": "4421" 00:17:02.038 }, 00:17:02.038 "method": "nvmf_subsystem_remove_listener", 00:17:02.038 "req_id": 1 00:17:02.038 } 00:17:02.038 Got JSON-RPC error response 00:17:02.038 response: 00:17:02.038 { 00:17:02.038 "code": -32602, 00:17:02.038 "message": "Invalid parameters" 00:17:02.038 }' 00:17:02.038 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:02.038 { 00:17:02.038 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:02.038 "listen_address": { 00:17:02.038 "trtype": "tcp", 00:17:02.038 "traddr": "", 00:17:02.038 "trsvcid": "4421" 00:17:02.038 }, 00:17:02.038 "method": "nvmf_subsystem_remove_listener", 00:17:02.038 "req_id": 1 00:17:02.038 } 00:17:02.038 Got JSON-RPC error response 00:17:02.038 response: 00:17:02.038 { 00:17:02.038 "code": -32602, 00:17:02.038 "message": "Invalid parameters" 00:17:02.038 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:02.038 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13754 -i 0 00:17:02.296 [2024-11-26 18:12:50.268727] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13754: invalid cntlid range [0-65519] 00:17:02.296 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:02.296 { 00:17:02.296 "nqn": "nqn.2016-06.io.spdk:cnode13754", 00:17:02.296 "min_cntlid": 0, 00:17:02.296 "method": "nvmf_create_subsystem", 00:17:02.296 "req_id": 1 00:17:02.296 } 00:17:02.296 Got JSON-RPC error response 00:17:02.296 response: 00:17:02.296 { 00:17:02.296 "code": -32602, 00:17:02.296 "message": "Invalid cntlid range [0-65519]" 00:17:02.296 }' 00:17:02.296 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:02.296 { 00:17:02.296 "nqn": "nqn.2016-06.io.spdk:cnode13754", 00:17:02.296 "min_cntlid": 0, 00:17:02.296 "method": "nvmf_create_subsystem", 00:17:02.296 "req_id": 1 00:17:02.296 } 00:17:02.296 Got JSON-RPC error response 00:17:02.296 response: 00:17:02.296 { 00:17:02.296 "code": -32602, 00:17:02.296 "message": "Invalid cntlid range [0-65519]" 00:17:02.296 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:02.296 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9933 -i 65520 00:17:02.554 [2024-11-26 18:12:50.541643] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9933: invalid cntlid range [65520-65519] 00:17:02.554 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:02.554 { 00:17:02.554 "nqn": "nqn.2016-06.io.spdk:cnode9933", 00:17:02.554 "min_cntlid": 65520, 00:17:02.554 "method": 
"nvmf_create_subsystem", 00:17:02.554 "req_id": 1 00:17:02.554 } 00:17:02.554 Got JSON-RPC error response 00:17:02.554 response: 00:17:02.554 { 00:17:02.554 "code": -32602, 00:17:02.554 "message": "Invalid cntlid range [65520-65519]" 00:17:02.554 }' 00:17:02.554 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:02.554 { 00:17:02.554 "nqn": "nqn.2016-06.io.spdk:cnode9933", 00:17:02.554 "min_cntlid": 65520, 00:17:02.554 "method": "nvmf_create_subsystem", 00:17:02.554 "req_id": 1 00:17:02.554 } 00:17:02.554 Got JSON-RPC error response 00:17:02.554 response: 00:17:02.554 { 00:17:02.554 "code": -32602, 00:17:02.554 "message": "Invalid cntlid range [65520-65519]" 00:17:02.554 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:02.554 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30100 -I 0 00:17:02.812 [2024-11-26 18:12:50.806498] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30100: invalid cntlid range [1-0] 00:17:03.069 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:03.069 { 00:17:03.069 "nqn": "nqn.2016-06.io.spdk:cnode30100", 00:17:03.069 "max_cntlid": 0, 00:17:03.069 "method": "nvmf_create_subsystem", 00:17:03.069 "req_id": 1 00:17:03.069 } 00:17:03.069 Got JSON-RPC error response 00:17:03.069 response: 00:17:03.069 { 00:17:03.069 "code": -32602, 00:17:03.069 "message": "Invalid cntlid range [1-0]" 00:17:03.069 }' 00:17:03.069 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:03.069 { 00:17:03.069 "nqn": "nqn.2016-06.io.spdk:cnode30100", 00:17:03.069 "max_cntlid": 0, 00:17:03.069 "method": "nvmf_create_subsystem", 00:17:03.069 "req_id": 1 00:17:03.069 } 00:17:03.069 Got JSON-RPC error response 00:17:03.069 response: 00:17:03.069 { 00:17:03.069 "code": -32602, 00:17:03.069 "message": "Invalid cntlid range [1-0]" 00:17:03.069 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:03.069 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2054 -I 65520 00:17:03.327 [2024-11-26 18:12:51.087434] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2054: invalid cntlid range [1-65520] 00:17:03.327 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:03.327 { 00:17:03.327 "nqn": "nqn.2016-06.io.spdk:cnode2054", 00:17:03.327 "max_cntlid": 65520, 00:17:03.327 "method": "nvmf_create_subsystem", 00:17:03.327 "req_id": 1 00:17:03.327 } 00:17:03.327 Got JSON-RPC error response 00:17:03.327 response: 00:17:03.327 { 00:17:03.327 "code": -32602, 00:17:03.327 "message": "Invalid cntlid range [1-65520]" 00:17:03.327 }' 00:17:03.327 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:03.327 { 00:17:03.327 "nqn": "nqn.2016-06.io.spdk:cnode2054", 00:17:03.327 "max_cntlid": 65520, 00:17:03.327 "method": "nvmf_create_subsystem", 00:17:03.327 "req_id": 1 00:17:03.327 } 00:17:03.327 Got JSON-RPC error response 00:17:03.327 response: 00:17:03.327 { 00:17:03.327 "code": -32602, 00:17:03.327 "message": "Invalid cntlid range [1-65520]" 00:17:03.327 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:03.327 18:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9130 -i 6 -I 5 00:17:03.585 [2024-11-26 18:12:51.368383] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9130: invalid cntlid range [6-5] 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:03.585 { 00:17:03.585 "nqn": "nqn.2016-06.io.spdk:cnode9130", 00:17:03.585 "min_cntlid": 6, 00:17:03.585 "max_cntlid": 5, 00:17:03.585 "method": "nvmf_create_subsystem", 00:17:03.585 "req_id": 1 00:17:03.585 } 00:17:03.585 Got JSON-RPC error response 00:17:03.585 response: 00:17:03.585 { 00:17:03.585 "code": -32602, 00:17:03.585 "message": "Invalid cntlid range [6-5]" 00:17:03.585 }' 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:03.585 { 00:17:03.585 "nqn": "nqn.2016-06.io.spdk:cnode9130", 00:17:03.585 "min_cntlid": 6, 00:17:03.585 "max_cntlid": 5, 00:17:03.585 "method": "nvmf_create_subsystem", 00:17:03.585 "req_id": 1 00:17:03.585 } 00:17:03.585 Got JSON-RPC error response 00:17:03.585 response: 00:17:03.585 { 00:17:03.585 "code": -32602, 00:17:03.585 "message": "Invalid cntlid range [6-5]" 00:17:03.585 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:03.585 { 00:17:03.585 "name": "foobar", 00:17:03.585 "method": "nvmf_delete_target", 00:17:03.585 "req_id": 1 00:17:03.585 } 00:17:03.585 Got JSON-RPC error response 00:17:03.585 response: 00:17:03.585 { 00:17:03.585 "code": -32602, 00:17:03.585 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:03.585 }' 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:03.585 { 00:17:03.585 "name": "foobar", 00:17:03.585 "method": "nvmf_delete_target", 00:17:03.585 "req_id": 1 00:17:03.585 } 00:17:03.585 Got JSON-RPC error response 00:17:03.585 response: 00:17:03.585 { 00:17:03.585 "code": -32602, 00:17:03.585 "message": "The specified target doesn't exist, cannot delete it." 
00:17:03.585 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:03.585 rmmod nvme_tcp 00:17:03.585 rmmod nvme_fabrics 00:17:03.585 rmmod nvme_keyring 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 570426 ']' 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 570426 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 570426 ']' 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 570426 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.585 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 570426 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 570426' 00:17:03.844 killing process with pid 570426 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 570426 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 570426 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.844 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:06.390 00:17:06.390 real 0m9.172s 00:17:06.390 user 0m21.717s 00:17:06.390 sys 0m2.599s 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:06.390 ************************************ 00:17:06.390 END TEST nvmf_invalid 00:17:06.390 ************************************ 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.390 ************************************ 00:17:06.390 START TEST nvmf_connect_stress 00:17:06.390 ************************************ 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:06.390 * Looking for test storage... 
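Every negative case the nvmf_invalid test just finished (oversized model number, out-of-range cntlid values, deleting a nonexistent target) follows the same shape: issue an RPC that must fail, capture the JSON-RPC error, and match on its message. A condensed sketch of that pattern, reusing the rpc.py path from the trace (the exact capture and assertion plumbing in target/invalid.sh may differ):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # min_cntlid must be >= 1, so the target rejects this request with JSON-RPC code -32602
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13754 -i 0 2>&1 || true)
    [[ $out == *"Invalid cntlid range"* ]] || exit 1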
00:17:06.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:06.390 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.390 --rc genhtml_branch_coverage=1 00:17:06.390 --rc genhtml_function_coverage=1 00:17:06.390 --rc genhtml_legend=1 00:17:06.390 --rc geninfo_all_blocks=1 00:17:06.390 --rc geninfo_unexecuted_blocks=1 00:17:06.390 00:17:06.390 ' 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.390 --rc genhtml_branch_coverage=1 00:17:06.390 --rc genhtml_function_coverage=1 00:17:06.390 --rc genhtml_legend=1 00:17:06.390 --rc geninfo_all_blocks=1 00:17:06.390 --rc geninfo_unexecuted_blocks=1 00:17:06.390 00:17:06.390 ' 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.390 --rc genhtml_branch_coverage=1 00:17:06.390 --rc genhtml_function_coverage=1 00:17:06.390 --rc genhtml_legend=1 00:17:06.390 --rc geninfo_all_blocks=1 00:17:06.390 --rc geninfo_unexecuted_blocks=1 00:17:06.390 00:17:06.390 ' 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:06.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.390 --rc genhtml_branch_coverage=1 00:17:06.390 --rc genhtml_function_coverage=1 00:17:06.390 --rc genhtml_legend=1 00:17:06.390 --rc geninfo_all_blocks=1 00:17:06.390 --rc geninfo_unexecuted_blocks=1 00:17:06.390 00:17:06.390 ' 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.390 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:06.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:06.391 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:08.926 18:12:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.926 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:08.927 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:08.927 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:08.927 Found net devices under 0000:09:00.0: cvl_0_0 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:08.927 Found net devices under 0000:09:00.1: cvl_0_1 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:08.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:17:08.927 00:17:08.927 --- 10.0.0.2 ping statistics --- 00:17:08.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.927 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:17:08.927 00:17:08.927 --- 10.0.0.1 ping statistics --- 00:17:08.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.927 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=573078 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 573078 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 573078 ']' 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:08.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.927 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.927 [2024-11-26 18:12:56.548864] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:17:08.928 [2024-11-26 18:12:56.548960] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.928 [2024-11-26 18:12:56.622755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.928 [2024-11-26 18:12:56.681797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.928 [2024-11-26 18:12:56.681841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.928 [2024-11-26 18:12:56.681869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.928 [2024-11-26 18:12:56.681881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.928 [2024-11-26 18:12:56.681890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.928 [2024-11-26 18:12:56.683435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.928 [2024-11-26 18:12:56.683486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.928 [2024-11-26 18:12:56.683490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.928 [2024-11-26 18:12:56.832752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
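Before the connect_stress workload starts, nvmf_tcp_init has set up a single-host topology over the two physical E810 ports: the target-side port is moved into its own network namespace, each side gets a 10.0.0.x address, TCP port 4420 is opened in iptables, and reachability is verified both ways before nvmf_tgt is launched inside the namespace. A condensed sketch of that setup using the interface and namespace names from the trace (address flushes, error handling, and the SPDK_NVMF iptables comment tag are omitted):

    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                     # initiator -> target reachability check
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1          # target -> initiator
    # the target application is then started inside the namespace, as in the trace:
    # ip netns exec "$TARGET_NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE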
00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.928 [2024-11-26 18:12:56.850042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:08.928 NULL1 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=573192 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 
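[annotation] The bring-up traced in this stretch is: a TCP transport with an 8192-byte IO unit size, subsystem nqn.2016-06.io.spdk:cnode1 (allow any host, serial SPDK00000000000001, at most 10 namespaces), a TCP listener on 10.0.0.2:4420, and a 1000 MiB null bdev with 512-byte blocks; connect_stress is then pointed at that subsystem on core 0 (-c 0x1) for 10 seconds (-t 10). rpc_cmd is the harness wrapper around scripts/rpc.py, so the same bring-up issued by hand would look roughly like the following sketch (arguments copied from the trace; /var/tmp/spdk.sock is a UNIX socket, so this works from the default network namespace too):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512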
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:08.928 18:12:56 
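[annotation] The twenty iterations of the @27/@28 loop just traced append one RPC invocation each to rpc.txt; xtrace suppresses the heredoc bodies, so the actual payloads are not visible here. The shape of the loop is roughly the following (the payload below is a neutral placeholder, not the script's real one):

    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    rm -f "$rpcs"
    for i in $(seq 1 20); do
        # one RPC line per pass; the real content is elided by the trace
        echo "placeholder_rpc_$i" >> "$rpcs"
    done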
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.928 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.493 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.493 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:09.493 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.493 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.493 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.750 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.750 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:09.750 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.750 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.750 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.007 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.007 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:10.007 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.007 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.007 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.265 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.265 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:10.265 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.265 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.265 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.523 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.523 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:10.523 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.523 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.523 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.089 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.089 18:12:58 
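[annotation] Everything from here until the "No such process" message further down is one monitoring loop: connect_stress.sh@34 probes the stress tool (pid 573192) with kill -0 roughly once a second and, while it is alive, @35 runs rpc_cmd against the target again, so the RPC path is exercised concurrently with the connect/disconnect storm. A generic sketch of that poll-until-exit pattern (simplified; the harness presumably replays the rpc.txt batch built above rather than the stand-in call used here):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    while kill -0 "$PERF_PID" 2>/dev/null; do
        # stand-in for the rpc_cmd at @35
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null
        sleep 1
    done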
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:11.089 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.089 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.089 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.346 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.346 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:11.346 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.346 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.346 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.603 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.603 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:11.603 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.603 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.603 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.861 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.861 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:11.861 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.861 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.861 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.425 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.425 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:12.425 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.425 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.425 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.683 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.683 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:12.683 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.683 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.683 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.940 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.940 18:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:12.940 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.940 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.940 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.198 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.198 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:13.198 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.198 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.198 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.455 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.455 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:13.455 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.455 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.455 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.048 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.048 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:14.048 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.048 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.048 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.048 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.048 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:14.048 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.048 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.048 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.613 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.613 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:14.613 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.613 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.613 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.869 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.869 18:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:14.869 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.869 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.870 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.125 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.125 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:15.125 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.125 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.125 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.415 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.415 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:15.415 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.415 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.415 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.716 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.716 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:15.717 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.717 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.717 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.974 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.974 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:15.974 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.974 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.974 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.537 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.537 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:16.537 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.537 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.537 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.794 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.794 18:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:16.794 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.794 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.794 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.051 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.051 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:17.051 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.051 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.051 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.308 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.308 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:17.308 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.308 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.308 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.874 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.874 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:17.874 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.874 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.874 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.131 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.131 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:18.131 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.131 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.132 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.389 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.389 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:18.389 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.389 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.389 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.646 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.646 18:13:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:18.646 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.646 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.646 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.904 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.904 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:18.904 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.904 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.904 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.162 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 573192 00:17:19.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (573192) - No such process 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 573192 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.420 rmmod nvme_tcp 00:17:19.420 rmmod nvme_fabrics 00:17:19.420 rmmod nvme_keyring 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 573078 ']' 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 573078 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 573078 ']' 00:17:19.420 18:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 573078 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 573078 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 573078' 00:17:19.420 killing process with pid 573078 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 573078 00:17:19.420 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 573078 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.679 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.583 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:21.583 00:17:21.583 real 0m15.661s 00:17:21.583 user 0m38.836s 00:17:21.583 sys 0m6.035s 00:17:21.583 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.583 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.583 ************************************ 00:17:21.583 END TEST nvmf_connect_stress 00:17:21.583 ************************************ 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:21.842 18:13:09 
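[annotation] Teardown in this stretch: once connect_stress exits (the "No such process" from kill is expected; the @38 wait reaps it), rpc.txt is removed, nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring, killprocess checks that pid 573078 still answers kill -0 and that its comm (reactor_1) is not sudo before killing the target, and iptr strips only the firewall rules the harness itself added; the whole case took 15.66 s of wall-clock time. The iptables cleanup works because every rule inserted earlier carried an SPDK_NVMF comment, so the idiom at nvmf/common.sh@791 can drop exactly those rules (copied from the trace, shown as a standalone sketch):

    # drop only the rules tagged by the harness, keep everything else untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore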
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:21.842 ************************************ 00:17:21.842 START TEST nvmf_fused_ordering 00:17:21.842 ************************************ 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:21.842 * Looking for test storage... 00:17:21.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:21.842 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:21.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.842 --rc genhtml_branch_coverage=1 00:17:21.842 --rc genhtml_function_coverage=1 00:17:21.842 --rc genhtml_legend=1 00:17:21.842 --rc geninfo_all_blocks=1 00:17:21.842 --rc geninfo_unexecuted_blocks=1 00:17:21.843 00:17:21.843 ' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.843 --rc genhtml_branch_coverage=1 00:17:21.843 --rc genhtml_function_coverage=1 00:17:21.843 --rc genhtml_legend=1 00:17:21.843 --rc geninfo_all_blocks=1 00:17:21.843 --rc geninfo_unexecuted_blocks=1 00:17:21.843 00:17:21.843 ' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.843 --rc genhtml_branch_coverage=1 00:17:21.843 --rc genhtml_function_coverage=1 00:17:21.843 --rc genhtml_legend=1 00:17:21.843 --rc geninfo_all_blocks=1 00:17:21.843 --rc geninfo_unexecuted_blocks=1 00:17:21.843 00:17:21.843 ' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:21.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:21.843 --rc genhtml_branch_coverage=1 00:17:21.843 --rc genhtml_function_coverage=1 00:17:21.843 --rc genhtml_legend=1 00:17:21.843 --rc geninfo_all_blocks=1 00:17:21.843 --rc geninfo_unexecuted_blocks=1 00:17:21.843 00:17:21.843 ' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:21.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:21.843 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:24.380 18:13:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:24.380 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:24.381 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
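[annotation] gather_supported_nvmf_pci_devs builds whitelists of Intel E810/X722 and Mellanox device IDs, walks the PCI bus, and resolves each matching function to its kernel netdev through sysfs, which is where the two cvl_0_* interfaces reported just below come from (both ports of the 0x8086:0x159b E810 at 0000:09:00.*). The sysfs lookup at @411/@428 boils down to this sketch for a single function (bus address taken from this machine's trace):

    # list the net devices the kernel bound to a given PCI function
    for d in /sys/bus/pci/devices/0000:09:00.0/net/*; do
        [ -e "$d" ] || continue
        echo "Found net device: ${d##*/}"
    done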
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:24.381 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:24.381 Found net devices under 0000:09:00.0: cvl_0_0 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:24.381 Found net devices under 0000:09:00.1: cvl_0_1 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:24.381 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:24.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:17:24.381 00:17:24.381 --- 10.0.0.2 ping statistics --- 00:17:24.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.381 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:17:24.381 00:17:24.381 --- 10.0.0.1 ping statistics --- 00:17:24.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.381 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=576376 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 576376 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 576376 ']' 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:24.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.381 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.381 [2024-11-26 18:13:12.127500] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:17:24.381 [2024-11-26 18:13:12.127593] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.382 [2024-11-26 18:13:12.196739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.382 [2024-11-26 18:13:12.252677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.382 [2024-11-26 18:13:12.252731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.382 [2024-11-26 18:13:12.252760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.382 [2024-11-26 18:13:12.252771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.382 [2024-11-26 18:13:12.252781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.382 [2024-11-26 18:13:12.253275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.382 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.382 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:24.382 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.382 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.382 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.640 [2024-11-26 18:13:12.398783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.640 [2024-11-26 18:13:12.414918] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.640 NULL1 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.640 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:24.640 [2024-11-26 18:13:12.462869] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:17:24.640 [2024-11-26 18:13:12.462912] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576401 ] 00:17:24.899 Attached to nqn.2016-06.io.spdk:cnode1 00:17:24.899 Namespace ID: 1 size: 1GB 00:17:24.899 fused_ordering(0) 00:17:24.899 fused_ordering(1) 00:17:24.899 fused_ordering(2) 00:17:24.899 fused_ordering(3) 00:17:24.899 fused_ordering(4) 00:17:24.899 fused_ordering(5) 00:17:24.899 fused_ordering(6) 00:17:24.899 fused_ordering(7) 00:17:24.899 fused_ordering(8) 00:17:24.899 fused_ordering(9) 00:17:24.899 fused_ordering(10) 00:17:24.899 fused_ordering(11) 00:17:24.899 fused_ordering(12) 00:17:24.899 fused_ordering(13) 00:17:24.899 fused_ordering(14) 00:17:24.899 fused_ordering(15) 00:17:24.899 fused_ordering(16) 00:17:24.899 fused_ordering(17) 00:17:24.899 fused_ordering(18) 00:17:24.899 fused_ordering(19) 00:17:24.899 fused_ordering(20) 00:17:24.899 fused_ordering(21) 00:17:24.899 fused_ordering(22) 00:17:24.899 fused_ordering(23) 00:17:24.899 fused_ordering(24) 00:17:24.899 fused_ordering(25) 00:17:24.899 fused_ordering(26) 00:17:24.899 fused_ordering(27) 00:17:24.899 fused_ordering(28) 00:17:24.899 fused_ordering(29) 00:17:24.899 fused_ordering(30) 00:17:24.899 fused_ordering(31) 00:17:24.899 fused_ordering(32) 00:17:24.899 fused_ordering(33) 00:17:24.899 fused_ordering(34) 00:17:24.899 fused_ordering(35) 00:17:24.899 fused_ordering(36) 00:17:24.899 fused_ordering(37) 00:17:24.899 fused_ordering(38) 00:17:24.899 fused_ordering(39) 00:17:24.899 fused_ordering(40) 00:17:24.899 fused_ordering(41) 00:17:24.899 fused_ordering(42) 00:17:24.899 fused_ordering(43) 00:17:24.899 fused_ordering(44) 00:17:24.899 fused_ordering(45) 00:17:24.899 fused_ordering(46) 00:17:24.899 fused_ordering(47) 00:17:24.899 fused_ordering(48) 00:17:24.899 fused_ordering(49) 00:17:24.899 fused_ordering(50) 00:17:24.899 fused_ordering(51) 00:17:24.899 fused_ordering(52) 00:17:24.899 fused_ordering(53) 00:17:24.899 fused_ordering(54) 00:17:24.899 fused_ordering(55) 00:17:24.899 fused_ordering(56) 00:17:24.899 fused_ordering(57) 00:17:24.899 fused_ordering(58) 00:17:24.899 fused_ordering(59) 00:17:24.899 fused_ordering(60) 00:17:24.899 fused_ordering(61) 00:17:24.899 fused_ordering(62) 00:17:24.899 fused_ordering(63) 00:17:24.899 fused_ordering(64) 00:17:24.899 fused_ordering(65) 00:17:24.899 fused_ordering(66) 00:17:24.899 fused_ordering(67) 00:17:24.899 fused_ordering(68) 00:17:24.899 fused_ordering(69) 00:17:24.899 fused_ordering(70) 00:17:24.899 fused_ordering(71) 00:17:24.899 fused_ordering(72) 00:17:24.899 fused_ordering(73) 00:17:24.899 fused_ordering(74) 00:17:24.899 fused_ordering(75) 00:17:24.899 fused_ordering(76) 00:17:24.899 fused_ordering(77) 00:17:24.899 fused_ordering(78) 00:17:24.899 fused_ordering(79) 00:17:24.899 fused_ordering(80) 00:17:24.899 fused_ordering(81) 00:17:24.899 fused_ordering(82) 00:17:24.899 fused_ordering(83) 00:17:24.899 fused_ordering(84) 00:17:24.899 fused_ordering(85) 00:17:24.899 fused_ordering(86) 00:17:24.899 fused_ordering(87) 00:17:24.899 fused_ordering(88) 00:17:24.899 fused_ordering(89) 00:17:24.899 fused_ordering(90) 00:17:24.899 fused_ordering(91) 00:17:24.899 fused_ordering(92) 00:17:24.899 fused_ordering(93) 00:17:24.899 fused_ordering(94) 00:17:24.899 fused_ordering(95) 00:17:24.899 fused_ordering(96) 00:17:24.899 fused_ordering(97) 00:17:24.899 fused_ordering(98) 
00:17:24.899 fused_ordering(99) 00:17:24.899 fused_ordering(100) 00:17:24.899 fused_ordering(101) 00:17:24.899 fused_ordering(102) 00:17:24.899 fused_ordering(103) 00:17:24.899 fused_ordering(104) 00:17:24.899 fused_ordering(105) 00:17:24.899 fused_ordering(106) 00:17:24.899 fused_ordering(107) 00:17:24.899 fused_ordering(108) 00:17:24.899 fused_ordering(109) 00:17:24.899 fused_ordering(110) 00:17:24.899 fused_ordering(111) 00:17:24.899 fused_ordering(112) 00:17:24.899 fused_ordering(113) 00:17:24.899 fused_ordering(114) 00:17:24.899 fused_ordering(115) 00:17:24.899 fused_ordering(116) 00:17:24.899 fused_ordering(117) 00:17:24.899 fused_ordering(118) 00:17:24.899 fused_ordering(119) 00:17:24.899 fused_ordering(120) 00:17:24.899 fused_ordering(121) 00:17:24.899 fused_ordering(122) 00:17:24.899 fused_ordering(123) 00:17:24.899 fused_ordering(124) 00:17:24.899 fused_ordering(125) 00:17:24.899 fused_ordering(126) 00:17:24.899 fused_ordering(127) 00:17:24.899 fused_ordering(128) 00:17:24.899 fused_ordering(129) 00:17:24.899 fused_ordering(130) 00:17:24.899 fused_ordering(131) 00:17:24.899 fused_ordering(132) 00:17:24.899 fused_ordering(133) 00:17:24.899 fused_ordering(134) 00:17:24.899 fused_ordering(135) 00:17:24.899 fused_ordering(136) 00:17:24.899 fused_ordering(137) 00:17:24.899 fused_ordering(138) 00:17:24.899 fused_ordering(139) 00:17:24.899 fused_ordering(140) 00:17:24.899 fused_ordering(141) 00:17:24.899 fused_ordering(142) 00:17:24.899 fused_ordering(143) 00:17:24.899 fused_ordering(144) 00:17:24.899 fused_ordering(145) 00:17:24.899 fused_ordering(146) 00:17:24.899 fused_ordering(147) 00:17:24.899 fused_ordering(148) 00:17:24.899 fused_ordering(149) 00:17:24.899 fused_ordering(150) 00:17:24.899 fused_ordering(151) 00:17:24.899 fused_ordering(152) 00:17:24.899 fused_ordering(153) 00:17:24.899 fused_ordering(154) 00:17:24.899 fused_ordering(155) 00:17:24.899 fused_ordering(156) 00:17:24.899 fused_ordering(157) 00:17:24.899 fused_ordering(158) 00:17:24.899 fused_ordering(159) 00:17:24.899 fused_ordering(160) 00:17:24.899 fused_ordering(161) 00:17:24.899 fused_ordering(162) 00:17:24.899 fused_ordering(163) 00:17:24.899 fused_ordering(164) 00:17:24.899 fused_ordering(165) 00:17:24.899 fused_ordering(166) 00:17:24.899 fused_ordering(167) 00:17:24.899 fused_ordering(168) 00:17:24.899 fused_ordering(169) 00:17:24.899 fused_ordering(170) 00:17:24.899 fused_ordering(171) 00:17:24.899 fused_ordering(172) 00:17:24.899 fused_ordering(173) 00:17:24.899 fused_ordering(174) 00:17:24.899 fused_ordering(175) 00:17:24.899 fused_ordering(176) 00:17:24.899 fused_ordering(177) 00:17:24.900 fused_ordering(178) 00:17:24.900 fused_ordering(179) 00:17:24.900 fused_ordering(180) 00:17:24.900 fused_ordering(181) 00:17:24.900 fused_ordering(182) 00:17:24.900 fused_ordering(183) 00:17:24.900 fused_ordering(184) 00:17:24.900 fused_ordering(185) 00:17:24.900 fused_ordering(186) 00:17:24.900 fused_ordering(187) 00:17:24.900 fused_ordering(188) 00:17:24.900 fused_ordering(189) 00:17:24.900 fused_ordering(190) 00:17:24.900 fused_ordering(191) 00:17:24.900 fused_ordering(192) 00:17:24.900 fused_ordering(193) 00:17:24.900 fused_ordering(194) 00:17:24.900 fused_ordering(195) 00:17:24.900 fused_ordering(196) 00:17:24.900 fused_ordering(197) 00:17:24.900 fused_ordering(198) 00:17:24.900 fused_ordering(199) 00:17:24.900 fused_ordering(200) 00:17:24.900 fused_ordering(201) 00:17:24.900 fused_ordering(202) 00:17:24.900 fused_ordering(203) 00:17:24.900 fused_ordering(204) 00:17:24.900 fused_ordering(205) 00:17:25.466 
fused_ordering(206) 00:17:25.466 fused_ordering(207) 00:17:25.466 fused_ordering(208) 00:17:25.466 fused_ordering(209) 00:17:25.467 fused_ordering(210) 00:17:25.467 fused_ordering(211) 00:17:25.467 fused_ordering(212) 00:17:25.467 fused_ordering(213) 00:17:25.467 fused_ordering(214) 00:17:25.467 fused_ordering(215) 00:17:25.467 fused_ordering(216) 00:17:25.467 fused_ordering(217) 00:17:25.467 fused_ordering(218) 00:17:25.467 fused_ordering(219) 00:17:25.467 fused_ordering(220) 00:17:25.467 fused_ordering(221) 00:17:25.467 fused_ordering(222) 00:17:25.467 fused_ordering(223) 00:17:25.467 fused_ordering(224) 00:17:25.467 fused_ordering(225) 00:17:25.467 fused_ordering(226) 00:17:25.467 fused_ordering(227) 00:17:25.467 fused_ordering(228) 00:17:25.467 fused_ordering(229) 00:17:25.467 fused_ordering(230) 00:17:25.467 fused_ordering(231) 00:17:25.467 fused_ordering(232) 00:17:25.467 fused_ordering(233) 00:17:25.467 fused_ordering(234) 00:17:25.467 fused_ordering(235) 00:17:25.467 fused_ordering(236) 00:17:25.467 fused_ordering(237) 00:17:25.467 fused_ordering(238) 00:17:25.467 fused_ordering(239) 00:17:25.467 fused_ordering(240) 00:17:25.467 fused_ordering(241) 00:17:25.467 fused_ordering(242) 00:17:25.467 fused_ordering(243) 00:17:25.467 fused_ordering(244) 00:17:25.467 fused_ordering(245) 00:17:25.467 fused_ordering(246) 00:17:25.467 fused_ordering(247) 00:17:25.467 fused_ordering(248) 00:17:25.467 fused_ordering(249) 00:17:25.467 fused_ordering(250) 00:17:25.467 fused_ordering(251) 00:17:25.467 fused_ordering(252) 00:17:25.467 fused_ordering(253) 00:17:25.467 fused_ordering(254) 00:17:25.467 fused_ordering(255) 00:17:25.467 fused_ordering(256) 00:17:25.467 fused_ordering(257) 00:17:25.467 fused_ordering(258) 00:17:25.467 fused_ordering(259) 00:17:25.467 fused_ordering(260) 00:17:25.467 fused_ordering(261) 00:17:25.467 fused_ordering(262) 00:17:25.467 fused_ordering(263) 00:17:25.467 fused_ordering(264) 00:17:25.467 fused_ordering(265) 00:17:25.467 fused_ordering(266) 00:17:25.467 fused_ordering(267) 00:17:25.467 fused_ordering(268) 00:17:25.467 fused_ordering(269) 00:17:25.467 fused_ordering(270) 00:17:25.467 fused_ordering(271) 00:17:25.467 fused_ordering(272) 00:17:25.467 fused_ordering(273) 00:17:25.467 fused_ordering(274) 00:17:25.467 fused_ordering(275) 00:17:25.467 fused_ordering(276) 00:17:25.467 fused_ordering(277) 00:17:25.467 fused_ordering(278) 00:17:25.467 fused_ordering(279) 00:17:25.467 fused_ordering(280) 00:17:25.467 fused_ordering(281) 00:17:25.467 fused_ordering(282) 00:17:25.467 fused_ordering(283) 00:17:25.467 fused_ordering(284) 00:17:25.467 fused_ordering(285) 00:17:25.467 fused_ordering(286) 00:17:25.467 fused_ordering(287) 00:17:25.467 fused_ordering(288) 00:17:25.467 fused_ordering(289) 00:17:25.467 fused_ordering(290) 00:17:25.467 fused_ordering(291) 00:17:25.467 fused_ordering(292) 00:17:25.467 fused_ordering(293) 00:17:25.467 fused_ordering(294) 00:17:25.467 fused_ordering(295) 00:17:25.467 fused_ordering(296) 00:17:25.467 fused_ordering(297) 00:17:25.467 fused_ordering(298) 00:17:25.467 fused_ordering(299) 00:17:25.467 fused_ordering(300) 00:17:25.467 fused_ordering(301) 00:17:25.467 fused_ordering(302) 00:17:25.467 fused_ordering(303) 00:17:25.467 fused_ordering(304) 00:17:25.467 fused_ordering(305) 00:17:25.467 fused_ordering(306) 00:17:25.467 fused_ordering(307) 00:17:25.467 fused_ordering(308) 00:17:25.467 fused_ordering(309) 00:17:25.467 fused_ordering(310) 00:17:25.467 fused_ordering(311) 00:17:25.467 fused_ordering(312) 00:17:25.467 fused_ordering(313) 
00:17:25.467 fused_ordering(314) 00:17:25.467 fused_ordering(315) 00:17:25.467 fused_ordering(316) 00:17:25.467 fused_ordering(317) 00:17:25.467 fused_ordering(318) 00:17:25.467 fused_ordering(319) 00:17:25.467 fused_ordering(320) 00:17:25.467 fused_ordering(321) 00:17:25.467 fused_ordering(322) 00:17:25.467 fused_ordering(323) 00:17:25.467 fused_ordering(324) 00:17:25.467 fused_ordering(325) 00:17:25.467 fused_ordering(326) 00:17:25.467 fused_ordering(327) 00:17:25.467 fused_ordering(328) 00:17:25.467 fused_ordering(329) 00:17:25.467 fused_ordering(330) 00:17:25.467 fused_ordering(331) 00:17:25.467 fused_ordering(332) 00:17:25.467 fused_ordering(333) 00:17:25.467 fused_ordering(334) 00:17:25.467 fused_ordering(335) 00:17:25.467 fused_ordering(336) 00:17:25.467 fused_ordering(337) 00:17:25.467 fused_ordering(338) 00:17:25.467 fused_ordering(339) 00:17:25.467 fused_ordering(340) 00:17:25.467 fused_ordering(341) 00:17:25.467 fused_ordering(342) 00:17:25.467 fused_ordering(343) 00:17:25.467 fused_ordering(344) 00:17:25.467 fused_ordering(345) 00:17:25.467 fused_ordering(346) 00:17:25.467 fused_ordering(347) 00:17:25.467 fused_ordering(348) 00:17:25.467 fused_ordering(349) 00:17:25.467 fused_ordering(350) 00:17:25.467 fused_ordering(351) 00:17:25.467 fused_ordering(352) 00:17:25.467 fused_ordering(353) 00:17:25.467 fused_ordering(354) 00:17:25.467 fused_ordering(355) 00:17:25.467 fused_ordering(356) 00:17:25.467 fused_ordering(357) 00:17:25.467 fused_ordering(358) 00:17:25.467 fused_ordering(359) 00:17:25.467 fused_ordering(360) 00:17:25.467 fused_ordering(361) 00:17:25.467 fused_ordering(362) 00:17:25.467 fused_ordering(363) 00:17:25.467 fused_ordering(364) 00:17:25.467 fused_ordering(365) 00:17:25.467 fused_ordering(366) 00:17:25.467 fused_ordering(367) 00:17:25.467 fused_ordering(368) 00:17:25.468 fused_ordering(369) 00:17:25.468 fused_ordering(370) 00:17:25.468 fused_ordering(371) 00:17:25.468 fused_ordering(372) 00:17:25.468 fused_ordering(373) 00:17:25.468 fused_ordering(374) 00:17:25.468 fused_ordering(375) 00:17:25.468 fused_ordering(376) 00:17:25.468 fused_ordering(377) 00:17:25.468 fused_ordering(378) 00:17:25.468 fused_ordering(379) 00:17:25.468 fused_ordering(380) 00:17:25.468 fused_ordering(381) 00:17:25.468 fused_ordering(382) 00:17:25.468 fused_ordering(383) 00:17:25.468 fused_ordering(384) 00:17:25.468 fused_ordering(385) 00:17:25.468 fused_ordering(386) 00:17:25.468 fused_ordering(387) 00:17:25.468 fused_ordering(388) 00:17:25.468 fused_ordering(389) 00:17:25.468 fused_ordering(390) 00:17:25.468 fused_ordering(391) 00:17:25.468 fused_ordering(392) 00:17:25.468 fused_ordering(393) 00:17:25.468 fused_ordering(394) 00:17:25.468 fused_ordering(395) 00:17:25.468 fused_ordering(396) 00:17:25.468 fused_ordering(397) 00:17:25.468 fused_ordering(398) 00:17:25.468 fused_ordering(399) 00:17:25.468 fused_ordering(400) 00:17:25.468 fused_ordering(401) 00:17:25.468 fused_ordering(402) 00:17:25.468 fused_ordering(403) 00:17:25.468 fused_ordering(404) 00:17:25.468 fused_ordering(405) 00:17:25.468 fused_ordering(406) 00:17:25.468 fused_ordering(407) 00:17:25.468 fused_ordering(408) 00:17:25.468 fused_ordering(409) 00:17:25.468 fused_ordering(410) 00:17:25.726 fused_ordering(411) 00:17:25.726 fused_ordering(412) 00:17:25.726 fused_ordering(413) 00:17:25.726 fused_ordering(414) 00:17:25.726 fused_ordering(415) 00:17:25.726 fused_ordering(416) 00:17:25.726 fused_ordering(417) 00:17:25.726 fused_ordering(418) 00:17:25.726 fused_ordering(419) 00:17:25.726 fused_ordering(420) 00:17:25.726 
fused_ordering(421) 00:17:25.726 fused_ordering(422) 00:17:25.726 fused_ordering(423) 00:17:25.726 fused_ordering(424) 00:17:25.726 fused_ordering(425) 00:17:25.726 fused_ordering(426) 00:17:25.726 fused_ordering(427) 00:17:25.726 fused_ordering(428) 00:17:25.726 fused_ordering(429) 00:17:25.726 fused_ordering(430) 00:17:25.726 fused_ordering(431) 00:17:25.726 fused_ordering(432) 00:17:25.726 fused_ordering(433) 00:17:25.726 fused_ordering(434) 00:17:25.726 fused_ordering(435) 00:17:25.726 fused_ordering(436) 00:17:25.726 fused_ordering(437) 00:17:25.726 fused_ordering(438) 00:17:25.726 fused_ordering(439) 00:17:25.726 fused_ordering(440) 00:17:25.726 fused_ordering(441) 00:17:25.726 fused_ordering(442) 00:17:25.726 fused_ordering(443) 00:17:25.726 fused_ordering(444) 00:17:25.726 fused_ordering(445) 00:17:25.726 fused_ordering(446) 00:17:25.726 fused_ordering(447) 00:17:25.726 fused_ordering(448) 00:17:25.726 fused_ordering(449) 00:17:25.726 fused_ordering(450) 00:17:25.726 fused_ordering(451) 00:17:25.726 fused_ordering(452) 00:17:25.726 fused_ordering(453) 00:17:25.726 fused_ordering(454) 00:17:25.726 fused_ordering(455) 00:17:25.726 fused_ordering(456) 00:17:25.726 fused_ordering(457) 00:17:25.726 fused_ordering(458) 00:17:25.726 fused_ordering(459) 00:17:25.726 fused_ordering(460) 00:17:25.726 fused_ordering(461) 00:17:25.726 fused_ordering(462) 00:17:25.726 fused_ordering(463) 00:17:25.726 fused_ordering(464) 00:17:25.726 fused_ordering(465) 00:17:25.726 fused_ordering(466) 00:17:25.726 fused_ordering(467) 00:17:25.726 fused_ordering(468) 00:17:25.726 fused_ordering(469) 00:17:25.726 fused_ordering(470) 00:17:25.726 fused_ordering(471) 00:17:25.726 fused_ordering(472) 00:17:25.726 fused_ordering(473) 00:17:25.726 fused_ordering(474) 00:17:25.726 fused_ordering(475) 00:17:25.726 fused_ordering(476) 00:17:25.726 fused_ordering(477) 00:17:25.726 fused_ordering(478) 00:17:25.726 fused_ordering(479) 00:17:25.726 fused_ordering(480) 00:17:25.726 fused_ordering(481) 00:17:25.726 fused_ordering(482) 00:17:25.726 fused_ordering(483) 00:17:25.726 fused_ordering(484) 00:17:25.726 fused_ordering(485) 00:17:25.726 fused_ordering(486) 00:17:25.726 fused_ordering(487) 00:17:25.726 fused_ordering(488) 00:17:25.726 fused_ordering(489) 00:17:25.726 fused_ordering(490) 00:17:25.726 fused_ordering(491) 00:17:25.726 fused_ordering(492) 00:17:25.726 fused_ordering(493) 00:17:25.726 fused_ordering(494) 00:17:25.726 fused_ordering(495) 00:17:25.726 fused_ordering(496) 00:17:25.726 fused_ordering(497) 00:17:25.726 fused_ordering(498) 00:17:25.727 fused_ordering(499) 00:17:25.727 fused_ordering(500) 00:17:25.727 fused_ordering(501) 00:17:25.727 fused_ordering(502) 00:17:25.727 fused_ordering(503) 00:17:25.727 fused_ordering(504) 00:17:25.727 fused_ordering(505) 00:17:25.727 fused_ordering(506) 00:17:25.727 fused_ordering(507) 00:17:25.727 fused_ordering(508) 00:17:25.727 fused_ordering(509) 00:17:25.727 fused_ordering(510) 00:17:25.727 fused_ordering(511) 00:17:25.727 fused_ordering(512) 00:17:25.727 fused_ordering(513) 00:17:25.727 fused_ordering(514) 00:17:25.727 fused_ordering(515) 00:17:25.727 fused_ordering(516) 00:17:25.727 fused_ordering(517) 00:17:25.727 fused_ordering(518) 00:17:25.727 fused_ordering(519) 00:17:25.727 fused_ordering(520) 00:17:25.727 fused_ordering(521) 00:17:25.727 fused_ordering(522) 00:17:25.727 fused_ordering(523) 00:17:25.727 fused_ordering(524) 00:17:25.727 fused_ordering(525) 00:17:25.727 fused_ordering(526) 00:17:25.727 fused_ordering(527) 00:17:25.727 fused_ordering(528) 
00:17:25.727 fused_ordering(529) 00:17:25.727 fused_ordering(530) 00:17:25.727 fused_ordering(531) 00:17:25.727 fused_ordering(532) 00:17:25.727 fused_ordering(533) 00:17:25.727 fused_ordering(534) 00:17:25.727 fused_ordering(535) 00:17:25.727 fused_ordering(536) 00:17:25.727 fused_ordering(537) 00:17:25.727 fused_ordering(538) 00:17:25.727 fused_ordering(539) 00:17:25.727 fused_ordering(540) 00:17:25.727 fused_ordering(541) 00:17:25.727 fused_ordering(542) 00:17:25.727 fused_ordering(543) 00:17:25.727 fused_ordering(544) 00:17:25.727 fused_ordering(545) 00:17:25.727 fused_ordering(546) 00:17:25.727 fused_ordering(547) 00:17:25.727 fused_ordering(548) 00:17:25.727 fused_ordering(549) 00:17:25.727 fused_ordering(550) 00:17:25.727 fused_ordering(551) 00:17:25.727 fused_ordering(552) 00:17:25.727 fused_ordering(553) 00:17:25.727 fused_ordering(554) 00:17:25.727 fused_ordering(555) 00:17:25.727 fused_ordering(556) 00:17:25.727 fused_ordering(557) 00:17:25.727 fused_ordering(558) 00:17:25.727 fused_ordering(559) 00:17:25.727 fused_ordering(560) 00:17:25.727 fused_ordering(561) 00:17:25.727 fused_ordering(562) 00:17:25.727 fused_ordering(563) 00:17:25.727 fused_ordering(564) 00:17:25.727 fused_ordering(565) 00:17:25.727 fused_ordering(566) 00:17:25.727 fused_ordering(567) 00:17:25.727 fused_ordering(568) 00:17:25.727 fused_ordering(569) 00:17:25.727 fused_ordering(570) 00:17:25.727 fused_ordering(571) 00:17:25.727 fused_ordering(572) 00:17:25.727 fused_ordering(573) 00:17:25.727 fused_ordering(574) 00:17:25.727 fused_ordering(575) 00:17:25.727 fused_ordering(576) 00:17:25.727 fused_ordering(577) 00:17:25.727 fused_ordering(578) 00:17:25.727 fused_ordering(579) 00:17:25.727 fused_ordering(580) 00:17:25.727 fused_ordering(581) 00:17:25.727 fused_ordering(582) 00:17:25.727 fused_ordering(583) 00:17:25.727 fused_ordering(584) 00:17:25.727 fused_ordering(585) 00:17:25.727 fused_ordering(586) 00:17:25.727 fused_ordering(587) 00:17:25.727 fused_ordering(588) 00:17:25.727 fused_ordering(589) 00:17:25.727 fused_ordering(590) 00:17:25.727 fused_ordering(591) 00:17:25.727 fused_ordering(592) 00:17:25.727 fused_ordering(593) 00:17:25.727 fused_ordering(594) 00:17:25.727 fused_ordering(595) 00:17:25.727 fused_ordering(596) 00:17:25.727 fused_ordering(597) 00:17:25.727 fused_ordering(598) 00:17:25.727 fused_ordering(599) 00:17:25.727 fused_ordering(600) 00:17:25.727 fused_ordering(601) 00:17:25.727 fused_ordering(602) 00:17:25.727 fused_ordering(603) 00:17:25.727 fused_ordering(604) 00:17:25.727 fused_ordering(605) 00:17:25.727 fused_ordering(606) 00:17:25.727 fused_ordering(607) 00:17:25.727 fused_ordering(608) 00:17:25.727 fused_ordering(609) 00:17:25.727 fused_ordering(610) 00:17:25.727 fused_ordering(611) 00:17:25.727 fused_ordering(612) 00:17:25.727 fused_ordering(613) 00:17:25.727 fused_ordering(614) 00:17:25.727 fused_ordering(615) 00:17:26.293 fused_ordering(616) 00:17:26.293 fused_ordering(617) 00:17:26.293 fused_ordering(618) 00:17:26.293 fused_ordering(619) 00:17:26.293 fused_ordering(620) 00:17:26.293 fused_ordering(621) 00:17:26.293 fused_ordering(622) 00:17:26.293 fused_ordering(623) 00:17:26.293 fused_ordering(624) 00:17:26.293 fused_ordering(625) 00:17:26.293 fused_ordering(626) 00:17:26.293 fused_ordering(627) 00:17:26.293 fused_ordering(628) 00:17:26.293 fused_ordering(629) 00:17:26.293 fused_ordering(630) 00:17:26.293 fused_ordering(631) 00:17:26.293 fused_ordering(632) 00:17:26.293 fused_ordering(633) 00:17:26.293 fused_ordering(634) 00:17:26.293 fused_ordering(635) 00:17:26.293 
fused_ordering(636) 00:17:26.293 fused_ordering(637) 00:17:26.293 fused_ordering(638) 00:17:26.293 fused_ordering(639) 00:17:26.293 fused_ordering(640) 00:17:26.293 fused_ordering(641) 00:17:26.293 fused_ordering(642) 00:17:26.293 fused_ordering(643) 00:17:26.293 fused_ordering(644) 00:17:26.293 fused_ordering(645) 00:17:26.293 fused_ordering(646) 00:17:26.293 fused_ordering(647) 00:17:26.293 fused_ordering(648) 00:17:26.293 fused_ordering(649) 00:17:26.293 fused_ordering(650) 00:17:26.293 fused_ordering(651) 00:17:26.293 fused_ordering(652) 00:17:26.293 fused_ordering(653) 00:17:26.293 fused_ordering(654) 00:17:26.293 fused_ordering(655) 00:17:26.293 fused_ordering(656) 00:17:26.293 fused_ordering(657) 00:17:26.293 fused_ordering(658) 00:17:26.293 fused_ordering(659) 00:17:26.293 fused_ordering(660) 00:17:26.293 fused_ordering(661) 00:17:26.293 fused_ordering(662) 00:17:26.293 fused_ordering(663) 00:17:26.293 fused_ordering(664) 00:17:26.293 fused_ordering(665) 00:17:26.293 fused_ordering(666) 00:17:26.293 fused_ordering(667) 00:17:26.293 fused_ordering(668) 00:17:26.293 fused_ordering(669) 00:17:26.293 fused_ordering(670) 00:17:26.293 fused_ordering(671) 00:17:26.293 fused_ordering(672) 00:17:26.293 fused_ordering(673) 00:17:26.293 fused_ordering(674) 00:17:26.293 fused_ordering(675) 00:17:26.294 fused_ordering(676) 00:17:26.294 fused_ordering(677) 00:17:26.294 fused_ordering(678) 00:17:26.294 fused_ordering(679) 00:17:26.294 fused_ordering(680) 00:17:26.294 fused_ordering(681) 00:17:26.294 fused_ordering(682) 00:17:26.294 fused_ordering(683) 00:17:26.294 fused_ordering(684) 00:17:26.294 fused_ordering(685) 00:17:26.294 fused_ordering(686) 00:17:26.294 fused_ordering(687) 00:17:26.294 fused_ordering(688) 00:17:26.294 fused_ordering(689) 00:17:26.294 fused_ordering(690) 00:17:26.294 fused_ordering(691) 00:17:26.294 fused_ordering(692) 00:17:26.294 fused_ordering(693) 00:17:26.294 fused_ordering(694) 00:17:26.294 fused_ordering(695) 00:17:26.294 fused_ordering(696) 00:17:26.294 fused_ordering(697) 00:17:26.294 fused_ordering(698) 00:17:26.294 fused_ordering(699) 00:17:26.294 fused_ordering(700) 00:17:26.294 fused_ordering(701) 00:17:26.294 fused_ordering(702) 00:17:26.294 fused_ordering(703) 00:17:26.294 fused_ordering(704) 00:17:26.294 fused_ordering(705) 00:17:26.294 fused_ordering(706) 00:17:26.294 fused_ordering(707) 00:17:26.294 fused_ordering(708) 00:17:26.294 fused_ordering(709) 00:17:26.294 fused_ordering(710) 00:17:26.294 fused_ordering(711) 00:17:26.294 fused_ordering(712) 00:17:26.294 fused_ordering(713) 00:17:26.294 fused_ordering(714) 00:17:26.294 fused_ordering(715) 00:17:26.294 fused_ordering(716) 00:17:26.294 fused_ordering(717) 00:17:26.294 fused_ordering(718) 00:17:26.294 fused_ordering(719) 00:17:26.294 fused_ordering(720) 00:17:26.294 fused_ordering(721) 00:17:26.294 fused_ordering(722) 00:17:26.294 fused_ordering(723) 00:17:26.294 fused_ordering(724) 00:17:26.294 fused_ordering(725) 00:17:26.294 fused_ordering(726) 00:17:26.294 fused_ordering(727) 00:17:26.294 fused_ordering(728) 00:17:26.294 fused_ordering(729) 00:17:26.294 fused_ordering(730) 00:17:26.294 fused_ordering(731) 00:17:26.294 fused_ordering(732) 00:17:26.294 fused_ordering(733) 00:17:26.294 fused_ordering(734) 00:17:26.294 fused_ordering(735) 00:17:26.294 fused_ordering(736) 00:17:26.294 fused_ordering(737) 00:17:26.294 fused_ordering(738) 00:17:26.294 fused_ordering(739) 00:17:26.294 fused_ordering(740) 00:17:26.294 fused_ordering(741) 00:17:26.294 fused_ordering(742) 00:17:26.294 fused_ordering(743) 
00:17:26.294 fused_ordering(744) 00:17:26.294 fused_ordering(745) 00:17:26.294 fused_ordering(746) 00:17:26.294 fused_ordering(747) 00:17:26.294 fused_ordering(748) 00:17:26.294 fused_ordering(749) 00:17:26.294 fused_ordering(750) 00:17:26.294 fused_ordering(751) 00:17:26.294 fused_ordering(752) 00:17:26.294 fused_ordering(753) 00:17:26.294 fused_ordering(754) 00:17:26.294 fused_ordering(755) 00:17:26.294 fused_ordering(756) 00:17:26.294 fused_ordering(757) 00:17:26.294 fused_ordering(758) 00:17:26.294 fused_ordering(759) 00:17:26.294 fused_ordering(760) 00:17:26.294 fused_ordering(761) 00:17:26.294 fused_ordering(762) 00:17:26.294 fused_ordering(763) 00:17:26.294 fused_ordering(764) 00:17:26.294 fused_ordering(765) 00:17:26.294 fused_ordering(766) 00:17:26.294 fused_ordering(767) 00:17:26.294 fused_ordering(768) 00:17:26.294 fused_ordering(769) 00:17:26.294 fused_ordering(770) 00:17:26.294 fused_ordering(771) 00:17:26.294 fused_ordering(772) 00:17:26.294 fused_ordering(773) 00:17:26.294 fused_ordering(774) 00:17:26.294 fused_ordering(775) 00:17:26.294 fused_ordering(776) 00:17:26.294 fused_ordering(777) 00:17:26.294 fused_ordering(778) 00:17:26.294 fused_ordering(779) 00:17:26.294 fused_ordering(780) 00:17:26.294 fused_ordering(781) 00:17:26.294 fused_ordering(782) 00:17:26.294 fused_ordering(783) 00:17:26.294 fused_ordering(784) 00:17:26.294 fused_ordering(785) 00:17:26.294 fused_ordering(786) 00:17:26.294 fused_ordering(787) 00:17:26.294 fused_ordering(788) 00:17:26.294 fused_ordering(789) 00:17:26.294 fused_ordering(790) 00:17:26.294 fused_ordering(791) 00:17:26.294 fused_ordering(792) 00:17:26.294 fused_ordering(793) 00:17:26.294 fused_ordering(794) 00:17:26.294 fused_ordering(795) 00:17:26.294 fused_ordering(796) 00:17:26.294 fused_ordering(797) 00:17:26.294 fused_ordering(798) 00:17:26.294 fused_ordering(799) 00:17:26.294 fused_ordering(800) 00:17:26.294 fused_ordering(801) 00:17:26.294 fused_ordering(802) 00:17:26.294 fused_ordering(803) 00:17:26.294 fused_ordering(804) 00:17:26.294 fused_ordering(805) 00:17:26.294 fused_ordering(806) 00:17:26.294 fused_ordering(807) 00:17:26.294 fused_ordering(808) 00:17:26.294 fused_ordering(809) 00:17:26.294 fused_ordering(810) 00:17:26.294 fused_ordering(811) 00:17:26.294 fused_ordering(812) 00:17:26.294 fused_ordering(813) 00:17:26.294 fused_ordering(814) 00:17:26.294 fused_ordering(815) 00:17:26.294 fused_ordering(816) 00:17:26.294 fused_ordering(817) 00:17:26.294 fused_ordering(818) 00:17:26.294 fused_ordering(819) 00:17:26.294 fused_ordering(820) 00:17:26.859 fused_ordering(821) 00:17:26.859 fused_ordering(822) 00:17:26.859 fused_ordering(823) 00:17:26.859 fused_ordering(824) 00:17:26.859 fused_ordering(825) 00:17:26.859 fused_ordering(826) 00:17:26.859 fused_ordering(827) 00:17:26.859 fused_ordering(828) 00:17:26.859 fused_ordering(829) 00:17:26.859 fused_ordering(830) 00:17:26.859 fused_ordering(831) 00:17:26.859 fused_ordering(832) 00:17:26.859 fused_ordering(833) 00:17:26.859 fused_ordering(834) 00:17:26.859 fused_ordering(835) 00:17:26.859 fused_ordering(836) 00:17:26.859 fused_ordering(837) 00:17:26.859 fused_ordering(838) 00:17:26.859 fused_ordering(839) 00:17:26.859 fused_ordering(840) 00:17:26.859 fused_ordering(841) 00:17:26.859 fused_ordering(842) 00:17:26.859 fused_ordering(843) 00:17:26.859 fused_ordering(844) 00:17:26.859 fused_ordering(845) 00:17:26.859 fused_ordering(846) 00:17:26.859 fused_ordering(847) 00:17:26.859 fused_ordering(848) 00:17:26.859 fused_ordering(849) 00:17:26.859 fused_ordering(850) 00:17:26.859 
fused_ordering(851) 00:17:26.859 fused_ordering(852) 00:17:26.859 fused_ordering(853) 00:17:26.859 fused_ordering(854) 00:17:26.859 fused_ordering(855) 00:17:26.859 fused_ordering(856) 00:17:26.859 fused_ordering(857) 00:17:26.859 fused_ordering(858) 00:17:26.859 fused_ordering(859) 00:17:26.859 fused_ordering(860) 00:17:26.859 fused_ordering(861) 00:17:26.859 fused_ordering(862) 00:17:26.859 fused_ordering(863) 00:17:26.859 fused_ordering(864) 00:17:26.859 fused_ordering(865) 00:17:26.859 fused_ordering(866) 00:17:26.859 fused_ordering(867) 00:17:26.859 fused_ordering(868) 00:17:26.859 fused_ordering(869) 00:17:26.859 fused_ordering(870) 00:17:26.859 fused_ordering(871) 00:17:26.859 fused_ordering(872) 00:17:26.859 fused_ordering(873) 00:17:26.859 fused_ordering(874) 00:17:26.859 fused_ordering(875) 00:17:26.859 fused_ordering(876) 00:17:26.859 fused_ordering(877) 00:17:26.859 fused_ordering(878) 00:17:26.859 fused_ordering(879) 00:17:26.859 fused_ordering(880) 00:17:26.859 fused_ordering(881) 00:17:26.859 fused_ordering(882) 00:17:26.859 fused_ordering(883) 00:17:26.859 fused_ordering(884) 00:17:26.859 fused_ordering(885) 00:17:26.859 fused_ordering(886) 00:17:26.859 fused_ordering(887) 00:17:26.859 fused_ordering(888) 00:17:26.859 fused_ordering(889) 00:17:26.859 fused_ordering(890) 00:17:26.859 fused_ordering(891) 00:17:26.859 fused_ordering(892) 00:17:26.859 fused_ordering(893) 00:17:26.859 fused_ordering(894) 00:17:26.859 fused_ordering(895) 00:17:26.859 fused_ordering(896) 00:17:26.859 fused_ordering(897) 00:17:26.859 fused_ordering(898) 00:17:26.859 fused_ordering(899) 00:17:26.859 fused_ordering(900) 00:17:26.859 fused_ordering(901) 00:17:26.859 fused_ordering(902) 00:17:26.859 fused_ordering(903) 00:17:26.859 fused_ordering(904) 00:17:26.859 fused_ordering(905) 00:17:26.859 fused_ordering(906) 00:17:26.859 fused_ordering(907) 00:17:26.859 fused_ordering(908) 00:17:26.859 fused_ordering(909) 00:17:26.859 fused_ordering(910) 00:17:26.859 fused_ordering(911) 00:17:26.859 fused_ordering(912) 00:17:26.859 fused_ordering(913) 00:17:26.859 fused_ordering(914) 00:17:26.859 fused_ordering(915) 00:17:26.859 fused_ordering(916) 00:17:26.859 fused_ordering(917) 00:17:26.859 fused_ordering(918) 00:17:26.859 fused_ordering(919) 00:17:26.859 fused_ordering(920) 00:17:26.859 fused_ordering(921) 00:17:26.859 fused_ordering(922) 00:17:26.859 fused_ordering(923) 00:17:26.859 fused_ordering(924) 00:17:26.859 fused_ordering(925) 00:17:26.859 fused_ordering(926) 00:17:26.859 fused_ordering(927) 00:17:26.859 fused_ordering(928) 00:17:26.859 fused_ordering(929) 00:17:26.859 fused_ordering(930) 00:17:26.859 fused_ordering(931) 00:17:26.859 fused_ordering(932) 00:17:26.859 fused_ordering(933) 00:17:26.860 fused_ordering(934) 00:17:26.860 fused_ordering(935) 00:17:26.860 fused_ordering(936) 00:17:26.860 fused_ordering(937) 00:17:26.860 fused_ordering(938) 00:17:26.860 fused_ordering(939) 00:17:26.860 fused_ordering(940) 00:17:26.860 fused_ordering(941) 00:17:26.860 fused_ordering(942) 00:17:26.860 fused_ordering(943) 00:17:26.860 fused_ordering(944) 00:17:26.860 fused_ordering(945) 00:17:26.860 fused_ordering(946) 00:17:26.860 fused_ordering(947) 00:17:26.860 fused_ordering(948) 00:17:26.860 fused_ordering(949) 00:17:26.860 fused_ordering(950) 00:17:26.860 fused_ordering(951) 00:17:26.860 fused_ordering(952) 00:17:26.860 fused_ordering(953) 00:17:26.860 fused_ordering(954) 00:17:26.860 fused_ordering(955) 00:17:26.860 fused_ordering(956) 00:17:26.860 fused_ordering(957) 00:17:26.860 fused_ordering(958) 
00:17:26.860 fused_ordering(959) 00:17:26.860 fused_ordering(960) 00:17:26.860 fused_ordering(961) 00:17:26.860 fused_ordering(962) 00:17:26.860 fused_ordering(963) 00:17:26.860 fused_ordering(964) 00:17:26.860 fused_ordering(965) 00:17:26.860 fused_ordering(966) 00:17:26.860 fused_ordering(967) 00:17:26.860 fused_ordering(968) 00:17:26.860 fused_ordering(969) 00:17:26.860 fused_ordering(970) 00:17:26.860 fused_ordering(971) 00:17:26.860 fused_ordering(972) 00:17:26.860 fused_ordering(973) 00:17:26.860 fused_ordering(974) 00:17:26.860 fused_ordering(975) 00:17:26.860 fused_ordering(976) 00:17:26.860 fused_ordering(977) 00:17:26.860 fused_ordering(978) 00:17:26.860 fused_ordering(979) 00:17:26.860 fused_ordering(980) 00:17:26.860 fused_ordering(981) 00:17:26.860 fused_ordering(982) 00:17:26.860 fused_ordering(983) 00:17:26.860 fused_ordering(984) 00:17:26.860 fused_ordering(985) 00:17:26.860 fused_ordering(986) 00:17:26.860 fused_ordering(987) 00:17:26.860 fused_ordering(988) 00:17:26.860 fused_ordering(989) 00:17:26.860 fused_ordering(990) 00:17:26.860 fused_ordering(991) 00:17:26.860 fused_ordering(992) 00:17:26.860 fused_ordering(993) 00:17:26.860 fused_ordering(994) 00:17:26.860 fused_ordering(995) 00:17:26.860 fused_ordering(996) 00:17:26.860 fused_ordering(997) 00:17:26.860 fused_ordering(998) 00:17:26.860 fused_ordering(999) 00:17:26.860 fused_ordering(1000) 00:17:26.860 fused_ordering(1001) 00:17:26.860 fused_ordering(1002) 00:17:26.860 fused_ordering(1003) 00:17:26.860 fused_ordering(1004) 00:17:26.860 fused_ordering(1005) 00:17:26.860 fused_ordering(1006) 00:17:26.860 fused_ordering(1007) 00:17:26.860 fused_ordering(1008) 00:17:26.860 fused_ordering(1009) 00:17:26.860 fused_ordering(1010) 00:17:26.860 fused_ordering(1011) 00:17:26.860 fused_ordering(1012) 00:17:26.860 fused_ordering(1013) 00:17:26.860 fused_ordering(1014) 00:17:26.860 fused_ordering(1015) 00:17:26.860 fused_ordering(1016) 00:17:26.860 fused_ordering(1017) 00:17:26.860 fused_ordering(1018) 00:17:26.860 fused_ordering(1019) 00:17:26.860 fused_ordering(1020) 00:17:26.860 fused_ordering(1021) 00:17:26.860 fused_ordering(1022) 00:17:26.860 fused_ordering(1023) 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:26.860 rmmod nvme_tcp 00:17:26.860 rmmod nvme_fabrics 00:17:26.860 rmmod nvme_keyring 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:26.860 18:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 576376 ']' 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 576376 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 576376 ']' 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 576376 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 576376 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 576376' 00:17:26.860 killing process with pid 576376 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 576376 00:17:26.860 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 576376 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.119 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:29.660 00:17:29.660 real 0m7.492s 00:17:29.660 user 0m5.097s 00:17:29.660 sys 0m3.062s 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:29.660 ************************************ 00:17:29.660 END TEST nvmf_fused_ordering 00:17:29.660 
************************************ 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.660 ************************************ 00:17:29.660 START TEST nvmf_ns_masking 00:17:29.660 ************************************ 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:29.660 * Looking for test storage... 00:17:29.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:29.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.660 --rc genhtml_branch_coverage=1 00:17:29.660 --rc genhtml_function_coverage=1 00:17:29.660 --rc genhtml_legend=1 00:17:29.660 --rc geninfo_all_blocks=1 00:17:29.660 --rc geninfo_unexecuted_blocks=1 00:17:29.660 00:17:29.660 ' 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:29.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.660 --rc genhtml_branch_coverage=1 00:17:29.660 --rc genhtml_function_coverage=1 00:17:29.660 --rc genhtml_legend=1 00:17:29.660 --rc geninfo_all_blocks=1 00:17:29.660 --rc geninfo_unexecuted_blocks=1 00:17:29.660 00:17:29.660 ' 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:29.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.660 --rc genhtml_branch_coverage=1 00:17:29.660 --rc genhtml_function_coverage=1 00:17:29.660 --rc genhtml_legend=1 00:17:29.660 --rc geninfo_all_blocks=1 00:17:29.660 --rc geninfo_unexecuted_blocks=1 00:17:29.660 00:17:29.660 ' 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:29.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.660 --rc genhtml_branch_coverage=1 00:17:29.660 --rc genhtml_function_coverage=1 00:17:29.660 --rc genhtml_legend=1 00:17:29.660 --rc geninfo_all_blocks=1 00:17:29.660 --rc geninfo_unexecuted_blocks=1 00:17:29.660 00:17:29.660 ' 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:29.660 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d5b8b9ae-676d-4426-8fc4-4d8fcefbd49c 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=30b8720f-4b86-4e7a-8dc7-ba3a2b661e54 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=7fc33973-9c5a-4f2f-bf34-23b6695ed882 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:29.661 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:31.565 18:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:31.565 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:31.565 18:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:31.565 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:31.565 Found net devices under 0000:09:00.0: cvl_0_0 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.565 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
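[editor's note] For readers following the trace: the nvmf_tcp_init steps that come next move the first discovered e810 port into a network namespace and wire up the 10.0.0.0/24 test network. Below is a condensed sketch of that sequence, assembled from the commands visible in this run (interface names cvl_0_0/cvl_0_1 and the addresses are specific to this host); the real helper in test/nvmf/common.sh does additional bookkeeping not shown here.

  # target side lives in its own netns so initiator and target can share one box
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # first e810 port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # second port stays in the root ns as initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator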
00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:31.566 Found net devices under 0000:09:00.1: cvl_0_1 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:31.566 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.824 18:13:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.824 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.824 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:31.824 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:31.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:17:31.824 00:17:31.824 --- 10.0.0.2 ping statistics --- 00:17:31.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.824 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:17:31.824 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:17:31.824 00:17:31.824 --- 10.0.0.1 ping statistics --- 00:17:31.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.824 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:17:31.824 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=578725 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 578725 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 578725 ']' 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.825 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:31.825 [2024-11-26 18:13:19.686221] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:17:31.825 [2024-11-26 18:13:19.686325] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.825 [2024-11-26 18:13:19.760570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.825 [2024-11-26 18:13:19.816654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.825 [2024-11-26 18:13:19.816702] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.825 [2024-11-26 18:13:19.816730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.825 [2024-11-26 18:13:19.816741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.825 [2024-11-26 18:13:19.816750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
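[editor's note] With nvmf_tgt now running inside cvl_0_0_ns_spdk, the ns_masking test drives the target over JSON-RPC and checks namespace visibility from the initiator. The sketch below condenses the command sequence that the following trace executes; rpc.py abbreviates spdk/scripts/rpc.py, and the NQNs, sizes and addresses are the ones used in this run. It is a summary of the trace, not a substitute for target/ns_masking.sh.

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1          # auto-visible namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator: connect as host1 and list which namespaces this host can see
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I "$HOSTID"   # HOSTID is uuidgen'd per run
  nvme list-ns /dev/nvme0
  # masking: re-add the namespace with --no-auto-visible, then expose/hide it per host
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 becomes visible to host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # and invisible again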
00:17:31.825 [2024-11-26 18:13:19.817309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.084 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.084 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:32.084 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.084 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.084 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:32.084 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.084 18:13:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:32.342 [2024-11-26 18:13:20.220156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.342 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:32.342 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:32.342 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:32.601 Malloc1 00:17:32.601 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:32.860 Malloc2 00:17:32.860 18:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:33.425 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:33.683 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.941 [2024-11-26 18:13:21.707776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.941 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:33.941 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7fc33973-9c5a-4f2f-bf34-23b6695ed882 -a 10.0.0.2 -s 4420 -i 4 00:17:33.941 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:33.941 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:33.941 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:33.941 18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:33.941 
18:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:36.469 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:36.470 [ 0]:0x1 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af7a45d08639489fa65c2bbd07ca1de1 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af7a45d08639489fa65c2bbd07ca1de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.470 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:36.470 [ 0]:0x1 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af7a45d08639489fa65c2bbd07ca1de1 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af7a45d08639489fa65c2bbd07ca1de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.470 18:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:36.470 [ 1]:0x2 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73e58532c17e4394a41812dae020072e 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73e58532c17e4394a41812dae020072e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:36.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:36.470 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:36.728 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:36.986 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:36.986 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7fc33973-9c5a-4f2f-bf34-23b6695ed882 -a 10.0.0.2 -s 4420 -i 4 00:17:37.244 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:37.244 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:37.244 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.244 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:37.244 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:37.244 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:39.143 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:39.143 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:39.143 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:39.143 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:39.143 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.143 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:17:39.143 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:39.143 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:39.402 [ 0]:0x2 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=73e58532c17e4394a41812dae020072e 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73e58532c17e4394a41812dae020072e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.402 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.662 [ 0]:0x1 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af7a45d08639489fa65c2bbd07ca1de1 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af7a45d08639489fa65c2bbd07ca1de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:39.662 [ 1]:0x2 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73e58532c17e4394a41812dae020072e 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73e58532c17e4394a41812dae020072e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.662 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:39.920 18:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.920 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:40.180 [ 0]:0x2 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:40.180 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.180 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73e58532c17e4394a41812dae020072e 00:17:40.180 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73e58532c17e4394a41812dae020072e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.180 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:40.180 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.181 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:40.439 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:40.439 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 7fc33973-9c5a-4f2f-bf34-23b6695ed882 -a 10.0.0.2 -s 4420 -i 4 00:17:40.697 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:40.697 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:40.697 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.697 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:40.697 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:40.697 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:42.597 [ 0]:0x1 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=af7a45d08639489fa65c2bbd07ca1de1 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ af7a45d08639489fa65c2bbd07ca1de1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:42.597 [ 1]:0x2 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:42.597 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:42.855 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73e58532c17e4394a41812dae020072e 00:17:42.855 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73e58532c17e4394a41812dae020072e != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:42.855 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.114 [ 0]:0x2 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.114 18:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73e58532c17e4394a41812dae020072e 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73e58532c17e4394a41812dae020072e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.114 18:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:43.114 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:43.372 [2024-11-26 18:13:31.296414] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:43.372 request: 00:17:43.372 { 00:17:43.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.372 "nsid": 2, 00:17:43.372 "host": "nqn.2016-06.io.spdk:host1", 00:17:43.372 "method": "nvmf_ns_remove_host", 00:17:43.372 "req_id": 1 00:17:43.372 } 00:17:43.372 Got JSON-RPC error response 00:17:43.372 response: 00:17:43.372 { 00:17:43.372 "code": -32602, 00:17:43.372 "message": "Invalid parameters" 00:17:43.372 } 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:43.372 18:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.372 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.637 [ 0]:0x2 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73e58532c17e4394a41812dae020072e 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73e58532c17e4394a41812dae020072e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:43.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=580230 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 580230 /var/tmp/host.sock 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 580230 ']' 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:43.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.637 18:13:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:43.637 [2024-11-26 18:13:31.639063] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:17:43.637 [2024-11-26 18:13:31.639146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid580230 ] 00:17:43.965 [2024-11-26 18:13:31.706533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.965 [2024-11-26 18:13:31.763505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.223 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.223 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:44.223 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:44.481 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:44.739 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d5b8b9ae-676d-4426-8fc4-4d8fcefbd49c 00:17:44.739 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:44.739 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D5B8B9AE676D44268FC44D8FCEFBD49C -i 00:17:44.997 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 30b8720f-4b86-4e7a-8dc7-ba3a2b661e54 00:17:44.997 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:44.997 18:13:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 30B8720F4B864E7A8DC7BA3A2B661E54 -i 00:17:45.255 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:45.512 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:45.770 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:45.770 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:46.336 nvme0n1 00:17:46.336 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:46.336 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:46.594 nvme1n2 00:17:46.594 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:46.594 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:46.594 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:46.594 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:46.594 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:46.857 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:46.857 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:46.857 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:46.857 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:47.117 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d5b8b9ae-676d-4426-8fc4-4d8fcefbd49c == \d\5\b\8\b\9\a\e\-\6\7\6\d\-\4\4\2\6\-\8\f\c\4\-\4\d\8\f\c\e\f\b\d\4\9\c ]] 00:17:47.117 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:47.117 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:47.117 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:47.374 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
30b8720f-4b86-4e7a-8dc7-ba3a2b661e54 == \3\0\b\8\7\2\0\f\-\4\b\8\6\-\4\e\7\a\-\8\d\c\7\-\b\a\3\a\2\b\6\6\1\e\5\4 ]] 00:17:47.374 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d5b8b9ae-676d-4426-8fc4-4d8fcefbd49c 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D5B8B9AE676D44268FC44D8FCEFBD49C 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D5B8B9AE676D44268FC44D8FCEFBD49C 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:47.940 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D5B8B9AE676D44268FC44D8FCEFBD49C 00:17:48.198 [2024-11-26 18:13:36.166404] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:48.198 [2024-11-26 18:13:36.166444] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:48.198 [2024-11-26 18:13:36.166473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:48.198 request: 00:17:48.198 { 00:17:48.198 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.198 "namespace": { 00:17:48.198 "bdev_name": 
"invalid", 00:17:48.198 "nsid": 1, 00:17:48.198 "nguid": "D5B8B9AE676D44268FC44D8FCEFBD49C", 00:17:48.198 "no_auto_visible": false 00:17:48.198 }, 00:17:48.198 "method": "nvmf_subsystem_add_ns", 00:17:48.198 "req_id": 1 00:17:48.198 } 00:17:48.198 Got JSON-RPC error response 00:17:48.198 response: 00:17:48.198 { 00:17:48.198 "code": -32602, 00:17:48.198 "message": "Invalid parameters" 00:17:48.198 } 00:17:48.198 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:48.198 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.198 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.198 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.198 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d5b8b9ae-676d-4426-8fc4-4d8fcefbd49c 00:17:48.198 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:48.198 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D5B8B9AE676D44268FC44D8FCEFBD49C -i 00:17:48.456 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 580230 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 580230 ']' 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 580230 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 580230 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 580230' 00:17:50.983 killing process with pid 580230 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 580230 00:17:50.983 18:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 580230 00:17:51.241 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.806 rmmod nvme_tcp 00:17:51.806 rmmod nvme_fabrics 00:17:51.806 rmmod nvme_keyring 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 578725 ']' 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 578725 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 578725 ']' 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 578725 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 578725 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 578725' 00:17:51.806 killing process with pid 578725 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 578725 00:17:51.806 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 578725 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:17:52.065 
18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.065 18:13:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.974 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:53.974 00:17:53.974 real 0m24.783s 00:17:53.974 user 0m36.044s 00:17:53.974 sys 0m4.631s 00:17:53.974 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.974 18:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:53.974 ************************************ 00:17:53.974 END TEST nvmf_ns_masking 00:17:53.974 ************************************ 00:17:54.234 18:13:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:54.234 18:13:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:54.234 18:13:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:54.234 18:13:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.234 18:13:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:54.234 ************************************ 00:17:54.234 START TEST nvmf_nvme_cli 00:17:54.234 ************************************ 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:54.234 * Looking for test storage... 
00:17:54.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:54.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.234 --rc genhtml_branch_coverage=1 00:17:54.234 --rc genhtml_function_coverage=1 00:17:54.234 --rc genhtml_legend=1 00:17:54.234 --rc geninfo_all_blocks=1 00:17:54.234 --rc geninfo_unexecuted_blocks=1 00:17:54.234 00:17:54.234 ' 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:54.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.234 --rc genhtml_branch_coverage=1 00:17:54.234 --rc genhtml_function_coverage=1 00:17:54.234 --rc genhtml_legend=1 00:17:54.234 --rc geninfo_all_blocks=1 00:17:54.234 --rc geninfo_unexecuted_blocks=1 00:17:54.234 00:17:54.234 ' 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:54.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.234 --rc genhtml_branch_coverage=1 00:17:54.234 --rc genhtml_function_coverage=1 00:17:54.234 --rc genhtml_legend=1 00:17:54.234 --rc geninfo_all_blocks=1 00:17:54.234 --rc geninfo_unexecuted_blocks=1 00:17:54.234 00:17:54.234 ' 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:54.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.234 --rc genhtml_branch_coverage=1 00:17:54.234 --rc genhtml_function_coverage=1 00:17:54.234 --rc genhtml_legend=1 00:17:54.234 --rc geninfo_all_blocks=1 00:17:54.234 --rc geninfo_unexecuted_blocks=1 00:17:54.234 00:17:54.234 ' 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.234 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:54.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:54.235 18:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:54.235 18:13:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:56.768 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:56.768 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.768 
18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:56.768 Found net devices under 0000:09:00.0: cvl_0_0 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:56.768 Found net devices under 0000:09:00.1: cvl_0_1 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.768 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:56.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:17:56.769 00:17:56.769 --- 10.0.0.2 ping statistics --- 00:17:56.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.769 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:56.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:17:56.769 00:17:56.769 --- 10.0.0.1 ping statistics --- 00:17:56.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.769 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=583184 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 583184 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 583184 ']' 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.769 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:56.769 [2024-11-26 18:13:44.610497] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:17:56.769 [2024-11-26 18:13:44.610574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.769 [2024-11-26 18:13:44.683628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.769 [2024-11-26 18:13:44.745388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.769 [2024-11-26 18:13:44.745440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.769 [2024-11-26 18:13:44.745453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.769 [2024-11-26 18:13:44.745464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.769 [2024-11-26 18:13:44.745473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:56.769 [2024-11-26 18:13:44.747075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.769 [2024-11-26 18:13:44.747140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.769 [2024-11-26 18:13:44.747206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.769 [2024-11-26 18:13:44.747209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 [2024-11-26 18:13:44.900818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 Malloc0 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 Malloc1 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 [2024-11-26 18:13:44.993860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.027 18:13:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.027 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.027 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:17:57.284 00:17:57.284 Discovery Log Number of Records 2, Generation counter 2 00:17:57.284 =====Discovery Log Entry 0====== 00:17:57.284 trtype: tcp 00:17:57.284 adrfam: ipv4 00:17:57.284 subtype: current discovery subsystem 00:17:57.284 treq: not required 00:17:57.284 portid: 0 00:17:57.284 trsvcid: 4420 00:17:57.284 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:57.284 traddr: 10.0.0.2 00:17:57.284 eflags: explicit discovery connections, duplicate discovery information 00:17:57.284 sectype: none 00:17:57.284 =====Discovery Log Entry 1====== 00:17:57.284 trtype: tcp 00:17:57.284 adrfam: ipv4 00:17:57.284 subtype: nvme subsystem 00:17:57.284 treq: not required 00:17:57.284 portid: 0 00:17:57.284 trsvcid: 4420 00:17:57.284 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:57.284 traddr: 10.0.0.2 00:17:57.284 eflags: none 00:17:57.284 sectype: none 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:57.284 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:57.848 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:57.848 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:17:57.848 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:57.848 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:57.848 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:57.848 18:13:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:00.374 18:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:00.374 /dev/nvme0n2 ]] 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:00.374 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:00.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.374 18:13:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:00.374 rmmod nvme_tcp 00:18:00.374 rmmod nvme_fabrics 00:18:00.374 rmmod nvme_keyring 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 583184 ']' 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 583184 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 583184 ']' 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 583184 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 583184 
00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 583184' 00:18:00.374 killing process with pid 583184 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 583184 00:18:00.374 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 583184 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.634 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.538 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:02.538 00:18:02.538 real 0m8.476s 00:18:02.538 user 0m15.354s 00:18:02.538 sys 0m2.431s 00:18:02.538 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.538 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:02.538 ************************************ 00:18:02.538 END TEST nvmf_nvme_cli 00:18:02.538 ************************************ 00:18:02.538 18:13:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:02.538 18:13:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:02.538 18:13:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.538 18:13:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.538 18:13:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.538 ************************************ 00:18:02.538 START TEST nvmf_vfio_user 00:18:02.538 ************************************ 00:18:02.538 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
00:18:02.797 * Looking for test storage... 00:18:02.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.797 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.797 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.797 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.797 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.797 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.797 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.797 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.798 --rc genhtml_branch_coverage=1 00:18:02.798 --rc genhtml_function_coverage=1 00:18:02.798 --rc genhtml_legend=1 00:18:02.798 --rc geninfo_all_blocks=1 00:18:02.798 --rc geninfo_unexecuted_blocks=1 00:18:02.798 00:18:02.798 ' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.798 --rc genhtml_branch_coverage=1 00:18:02.798 --rc genhtml_function_coverage=1 00:18:02.798 --rc genhtml_legend=1 00:18:02.798 --rc geninfo_all_blocks=1 00:18:02.798 --rc geninfo_unexecuted_blocks=1 00:18:02.798 00:18:02.798 ' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.798 --rc genhtml_branch_coverage=1 00:18:02.798 --rc genhtml_function_coverage=1 00:18:02.798 --rc genhtml_legend=1 00:18:02.798 --rc geninfo_all_blocks=1 00:18:02.798 --rc geninfo_unexecuted_blocks=1 00:18:02.798 00:18:02.798 ' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.798 --rc genhtml_branch_coverage=1 00:18:02.798 --rc genhtml_function_coverage=1 00:18:02.798 --rc genhtml_legend=1 00:18:02.798 --rc geninfo_all_blocks=1 00:18:02.798 --rc geninfo_unexecuted_blocks=1 00:18:02.798 00:18:02.798 ' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:02.798 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=584084 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 584084' 00:18:02.799 Process pid: 584084 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 584084 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 584084 ']' 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.799 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:02.799 [2024-11-26 18:13:50.755239] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:18:02.799 [2024-11-26 18:13:50.755360] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.057 [2024-11-26 18:13:50.822840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.057 [2024-11-26 18:13:50.878592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.057 [2024-11-26 18:13:50.878646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:03.057 [2024-11-26 18:13:50.878667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.057 [2024-11-26 18:13:50.878677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.057 [2024-11-26 18:13:50.878687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.057 [2024-11-26 18:13:50.880131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.057 [2024-11-26 18:13:50.880236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.057 [2024-11-26 18:13:50.880325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.057 [2024-11-26 18:13:50.880330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.057 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.057 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:03.057 18:13:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:04.431 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:04.431 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:04.431 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:04.431 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:04.431 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:04.431 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:04.690 Malloc1 00:18:04.690 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:04.948 18:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:05.206 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:05.772 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:05.772 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:05.772 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:06.030 Malloc2 00:18:06.030 18:13:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:18:06.287 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:06.545 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:06.803 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:06.803 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:06.803 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:06.803 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:06.803 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:06.803 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:06.803 [2024-11-26 18:13:54.722008] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:18:06.803 [2024-11-26 18:13:54.722051] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid584509 ] 00:18:06.803 [2024-11-26 18:13:54.772677] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:06.803 [2024-11-26 18:13:54.781834] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:06.803 [2024-11-26 18:13:54.781867] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f81ac0c9000 00:18:06.803 [2024-11-26 18:13:54.782829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.803 [2024-11-26 18:13:54.783826] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.803 [2024-11-26 18:13:54.784833] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.803 [2024-11-26 18:13:54.785839] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:06.803 [2024-11-26 18:13:54.786843] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:06.803 [2024-11-26 18:13:54.787848] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.803 [2024-11-26 18:13:54.788850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:06.803 [2024-11-26 18:13:54.789858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:06.803 [2024-11-26 18:13:54.790863] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:06.803 [2024-11-26 18:13:54.790884] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f81ac0be000 00:18:06.803 [2024-11-26 18:13:54.792001] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:06.804 [2024-11-26 18:13:54.807658] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:06.804 [2024-11-26 18:13:54.807698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:06.804 [2024-11-26 18:13:54.809986] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:06.804 [2024-11-26 18:13:54.810039] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:06.804 [2024-11-26 18:13:54.810131] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:06.804 [2024-11-26 18:13:54.810158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:06.804 [2024-11-26 18:13:54.810184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:06.804 [2024-11-26 18:13:54.810977] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:06.804 [2024-11-26 18:13:54.811004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:06.804 [2024-11-26 18:13:54.811019] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:06.804 [2024-11-26 18:13:54.811976] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:06.804 [2024-11-26 18:13:54.811996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:06.804 [2024-11-26 18:13:54.812010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:07.063 [2024-11-26 18:13:54.812984] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:07.063 [2024-11-26 18:13:54.813005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:07.063 [2024-11-26 18:13:54.813988] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:18:07.063 [2024-11-26 18:13:54.814006] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:07.063 [2024-11-26 18:13:54.814015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:07.063 [2024-11-26 18:13:54.814026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:07.063 [2024-11-26 18:13:54.814146] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:07.064 [2024-11-26 18:13:54.814159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:07.064 [2024-11-26 18:13:54.814168] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:07.064 [2024-11-26 18:13:54.814998] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:07.064 [2024-11-26 18:13:54.816004] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:07.064 [2024-11-26 18:13:54.817007] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:07.064 [2024-11-26 18:13:54.818006] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:07.064 [2024-11-26 18:13:54.818144] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:07.064 [2024-11-26 18:13:54.819018] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:07.064 [2024-11-26 18:13:54.819036] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:07.064 [2024-11-26 18:13:54.819045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819068] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:07.064 [2024-11-26 18:13:54.819086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819117] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:07.064 [2024-11-26 18:13:54.819128] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:07.064 [2024-11-26 18:13:54.819134] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.064 [2024-11-26 18:13:54.819161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:07.064 [2024-11-26 18:13:54.819246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:07.064 [2024-11-26 18:13:54.819262] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:07.064 [2024-11-26 18:13:54.819281] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:07.064 [2024-11-26 18:13:54.819311] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:07.064 [2024-11-26 18:13:54.819320] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:07.064 [2024-11-26 18:13:54.819328] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:07.064 [2024-11-26 18:13:54.819336] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:07.064 [2024-11-26 18:13:54.819367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:07.064 [2024-11-26 18:13:54.819415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:07.064 [2024-11-26 18:13:54.819432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.064 [2024-11-26 18:13:54.819445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.064 [2024-11-26 18:13:54.819458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.064 [2024-11-26 18:13:54.819471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:07.064 [2024-11-26 18:13:54.819480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:07.064 [2024-11-26 18:13:54.819524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:07.064 [2024-11-26 18:13:54.819534] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:07.064 
[2024-11-26 18:13:54.819543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819597] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:07.064 [2024-11-26 18:13:54.819610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:07.064 [2024-11-26 18:13:54.819691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819722] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:07.064 [2024-11-26 18:13:54.819730] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:07.064 [2024-11-26 18:13:54.819736] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.064 [2024-11-26 18:13:54.819745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:07.064 [2024-11-26 18:13:54.819765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:07.064 [2024-11-26 18:13:54.819786] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:07.064 [2024-11-26 18:13:54.819801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819831] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:07.064 [2024-11-26 18:13:54.819840] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:07.064 [2024-11-26 18:13:54.819845] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.064 [2024-11-26 18:13:54.819854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:07.064 [2024-11-26 18:13:54.819885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:07.064 [2024-11-26 18:13:54.819902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819927] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:07.064 [2024-11-26 18:13:54.819935] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:07.064 [2024-11-26 18:13:54.819941] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.064 [2024-11-26 18:13:54.819950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:07.064 [2024-11-26 18:13:54.819963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:07.064 [2024-11-26 18:13:54.819982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.819994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.820007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.820017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.820025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.820033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.820041] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:07.064 [2024-11-26 18:13:54.820048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:07.064 [2024-11-26 18:13:54.820056] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:07.064 [2024-11-26 18:13:54.820080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:07.064 [2024-11-26 18:13:54.820099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:07.064 [2024-11-26 18:13:54.820117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:07.064 [2024-11-26 18:13:54.820129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:07.065 [2024-11-26 18:13:54.820150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:07.065 [2024-11-26 18:13:54.820165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:07.065 [2024-11-26 18:13:54.820181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:07.065 [2024-11-26 18:13:54.820193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:07.065 [2024-11-26 18:13:54.820217] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:07.065 [2024-11-26 18:13:54.820228] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:07.065 [2024-11-26 18:13:54.820234] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:07.065 [2024-11-26 18:13:54.820240] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:07.065 [2024-11-26 18:13:54.820246] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:07.065 [2024-11-26 18:13:54.820254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:07.065 [2024-11-26 18:13:54.820266] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:07.065 [2024-11-26 18:13:54.820274] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:07.065 [2024-11-26 18:13:54.820295] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.065 [2024-11-26 18:13:54.820316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:07.065 [2024-11-26 18:13:54.820331] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:07.065 [2024-11-26 18:13:54.820339] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:07.065 [2024-11-26 18:13:54.820345] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.065 [2024-11-26 18:13:54.820361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:07.065 [2024-11-26 18:13:54.820374] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:07.065 [2024-11-26 18:13:54.820382] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:07.065 [2024-11-26 18:13:54.820388] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:07.065 [2024-11-26 18:13:54.820397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:07.065 [2024-11-26 18:13:54.820409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:07.065 [2024-11-26 18:13:54.820430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:07.065 [2024-11-26 18:13:54.820449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:07.065 [2024-11-26 18:13:54.820462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:07.065 ===================================================== 00:18:07.065 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:07.065 ===================================================== 00:18:07.065 Controller Capabilities/Features 00:18:07.065 ================================ 00:18:07.065 Vendor ID: 4e58 00:18:07.065 Subsystem Vendor ID: 4e58 00:18:07.065 Serial Number: SPDK1 00:18:07.065 Model Number: SPDK bdev Controller 00:18:07.065 Firmware Version: 25.01 00:18:07.065 Recommended Arb Burst: 6 00:18:07.065 IEEE OUI Identifier: 8d 6b 50 00:18:07.065 Multi-path I/O 00:18:07.065 May have multiple subsystem ports: Yes 00:18:07.065 May have multiple controllers: Yes 00:18:07.065 Associated with SR-IOV VF: No 00:18:07.065 Max Data Transfer Size: 131072 00:18:07.065 Max Number of Namespaces: 32 00:18:07.065 Max Number of I/O Queues: 127 00:18:07.065 NVMe Specification Version (VS): 1.3 00:18:07.065 NVMe Specification Version (Identify): 1.3 00:18:07.065 Maximum Queue Entries: 256 00:18:07.065 Contiguous Queues Required: Yes 00:18:07.065 Arbitration Mechanisms Supported 00:18:07.065 Weighted Round Robin: Not Supported 00:18:07.065 Vendor Specific: Not Supported 00:18:07.065 Reset Timeout: 15000 ms 00:18:07.065 Doorbell Stride: 4 bytes 00:18:07.065 NVM Subsystem Reset: Not Supported 00:18:07.065 Command Sets Supported 00:18:07.065 NVM Command Set: Supported 00:18:07.065 Boot Partition: Not Supported 00:18:07.065 Memory Page Size Minimum: 4096 bytes 00:18:07.065 Memory Page Size Maximum: 4096 bytes 00:18:07.065 Persistent Memory Region: Not Supported 00:18:07.065 Optional Asynchronous Events Supported 00:18:07.065 Namespace Attribute Notices: Supported 00:18:07.065 Firmware Activation Notices: Not Supported 00:18:07.065 ANA Change Notices: Not Supported 00:18:07.065 PLE Aggregate Log Change Notices: Not Supported 00:18:07.065 LBA Status Info Alert Notices: Not Supported 00:18:07.065 EGE Aggregate Log Change Notices: Not Supported 00:18:07.065 Normal NVM Subsystem Shutdown event: Not Supported 00:18:07.065 Zone Descriptor Change Notices: Not Supported 00:18:07.065 Discovery Log Change Notices: Not Supported 00:18:07.065 Controller Attributes 00:18:07.065 128-bit Host Identifier: Supported 00:18:07.065 Non-Operational Permissive Mode: Not Supported 00:18:07.065 NVM Sets: Not Supported 00:18:07.065 Read Recovery Levels: Not Supported 00:18:07.065 Endurance Groups: Not Supported 00:18:07.065 Predictable Latency Mode: Not Supported 00:18:07.065 Traffic Based Keep ALive: Not Supported 00:18:07.065 Namespace Granularity: Not Supported 00:18:07.065 SQ Associations: Not Supported 00:18:07.065 UUID List: Not Supported 00:18:07.065 Multi-Domain Subsystem: Not Supported 00:18:07.065 Fixed Capacity Management: Not Supported 00:18:07.065 Variable Capacity Management: Not Supported 00:18:07.065 Delete Endurance Group: Not Supported 00:18:07.065 Delete NVM Set: Not Supported 00:18:07.065 Extended LBA Formats Supported: Not Supported 00:18:07.065 Flexible Data Placement Supported: Not Supported 00:18:07.065 00:18:07.065 Controller Memory Buffer Support 00:18:07.065 ================================ 00:18:07.065 
Supported: No 00:18:07.065 00:18:07.065 Persistent Memory Region Support 00:18:07.065 ================================ 00:18:07.065 Supported: No 00:18:07.065 00:18:07.065 Admin Command Set Attributes 00:18:07.065 ============================ 00:18:07.065 Security Send/Receive: Not Supported 00:18:07.065 Format NVM: Not Supported 00:18:07.065 Firmware Activate/Download: Not Supported 00:18:07.065 Namespace Management: Not Supported 00:18:07.065 Device Self-Test: Not Supported 00:18:07.065 Directives: Not Supported 00:18:07.065 NVMe-MI: Not Supported 00:18:07.065 Virtualization Management: Not Supported 00:18:07.065 Doorbell Buffer Config: Not Supported 00:18:07.065 Get LBA Status Capability: Not Supported 00:18:07.065 Command & Feature Lockdown Capability: Not Supported 00:18:07.065 Abort Command Limit: 4 00:18:07.065 Async Event Request Limit: 4 00:18:07.065 Number of Firmware Slots: N/A 00:18:07.065 Firmware Slot 1 Read-Only: N/A 00:18:07.065 Firmware Activation Without Reset: N/A 00:18:07.065 Multiple Update Detection Support: N/A 00:18:07.065 Firmware Update Granularity: No Information Provided 00:18:07.065 Per-Namespace SMART Log: No 00:18:07.065 Asymmetric Namespace Access Log Page: Not Supported 00:18:07.065 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:07.065 Command Effects Log Page: Supported 00:18:07.065 Get Log Page Extended Data: Supported 00:18:07.065 Telemetry Log Pages: Not Supported 00:18:07.065 Persistent Event Log Pages: Not Supported 00:18:07.065 Supported Log Pages Log Page: May Support 00:18:07.065 Commands Supported & Effects Log Page: Not Supported 00:18:07.065 Feature Identifiers & Effects Log Page:May Support 00:18:07.065 NVMe-MI Commands & Effects Log Page: May Support 00:18:07.065 Data Area 4 for Telemetry Log: Not Supported 00:18:07.065 Error Log Page Entries Supported: 128 00:18:07.065 Keep Alive: Supported 00:18:07.065 Keep Alive Granularity: 10000 ms 00:18:07.065 00:18:07.065 NVM Command Set Attributes 00:18:07.065 ========================== 00:18:07.065 Submission Queue Entry Size 00:18:07.065 Max: 64 00:18:07.065 Min: 64 00:18:07.065 Completion Queue Entry Size 00:18:07.065 Max: 16 00:18:07.065 Min: 16 00:18:07.065 Number of Namespaces: 32 00:18:07.065 Compare Command: Supported 00:18:07.065 Write Uncorrectable Command: Not Supported 00:18:07.065 Dataset Management Command: Supported 00:18:07.065 Write Zeroes Command: Supported 00:18:07.065 Set Features Save Field: Not Supported 00:18:07.065 Reservations: Not Supported 00:18:07.065 Timestamp: Not Supported 00:18:07.065 Copy: Supported 00:18:07.065 Volatile Write Cache: Present 00:18:07.065 Atomic Write Unit (Normal): 1 00:18:07.065 Atomic Write Unit (PFail): 1 00:18:07.065 Atomic Compare & Write Unit: 1 00:18:07.065 Fused Compare & Write: Supported 00:18:07.065 Scatter-Gather List 00:18:07.066 SGL Command Set: Supported (Dword aligned) 00:18:07.066 SGL Keyed: Not Supported 00:18:07.066 SGL Bit Bucket Descriptor: Not Supported 00:18:07.066 SGL Metadata Pointer: Not Supported 00:18:07.066 Oversized SGL: Not Supported 00:18:07.066 SGL Metadata Address: Not Supported 00:18:07.066 SGL Offset: Not Supported 00:18:07.066 Transport SGL Data Block: Not Supported 00:18:07.066 Replay Protected Memory Block: Not Supported 00:18:07.066 00:18:07.066 Firmware Slot Information 00:18:07.066 ========================= 00:18:07.066 Active slot: 1 00:18:07.066 Slot 1 Firmware Revision: 25.01 00:18:07.066 00:18:07.066 00:18:07.066 Commands Supported and Effects 00:18:07.066 ============================== 00:18:07.066 Admin 
Commands 00:18:07.066 -------------- 00:18:07.066 Get Log Page (02h): Supported 00:18:07.066 Identify (06h): Supported 00:18:07.066 Abort (08h): Supported 00:18:07.066 Set Features (09h): Supported 00:18:07.066 Get Features (0Ah): Supported 00:18:07.066 Asynchronous Event Request (0Ch): Supported 00:18:07.066 Keep Alive (18h): Supported 00:18:07.066 I/O Commands 00:18:07.066 ------------ 00:18:07.066 Flush (00h): Supported LBA-Change 00:18:07.066 Write (01h): Supported LBA-Change 00:18:07.066 Read (02h): Supported 00:18:07.066 Compare (05h): Supported 00:18:07.066 Write Zeroes (08h): Supported LBA-Change 00:18:07.066 Dataset Management (09h): Supported LBA-Change 00:18:07.066 Copy (19h): Supported LBA-Change 00:18:07.066 00:18:07.066 Error Log 00:18:07.066 ========= 00:18:07.066 00:18:07.066 Arbitration 00:18:07.066 =========== 00:18:07.066 Arbitration Burst: 1 00:18:07.066 00:18:07.066 Power Management 00:18:07.066 ================ 00:18:07.066 Number of Power States: 1 00:18:07.066 Current Power State: Power State #0 00:18:07.066 Power State #0: 00:18:07.066 Max Power: 0.00 W 00:18:07.066 Non-Operational State: Operational 00:18:07.066 Entry Latency: Not Reported 00:18:07.066 Exit Latency: Not Reported 00:18:07.066 Relative Read Throughput: 0 00:18:07.066 Relative Read Latency: 0 00:18:07.066 Relative Write Throughput: 0 00:18:07.066 Relative Write Latency: 0 00:18:07.066 Idle Power: Not Reported 00:18:07.066 Active Power: Not Reported 00:18:07.066 Non-Operational Permissive Mode: Not Supported 00:18:07.066 00:18:07.066 Health Information 00:18:07.066 ================== 00:18:07.066 Critical Warnings: 00:18:07.066 Available Spare Space: OK 00:18:07.066 Temperature: OK 00:18:07.066 Device Reliability: OK 00:18:07.066 Read Only: No 00:18:07.066 Volatile Memory Backup: OK 00:18:07.066 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:07.066 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:07.066 Available Spare: 0% 00:18:07.066 Available Sp[2024-11-26 18:13:54.820581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:07.066 [2024-11-26 18:13:54.820598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:07.066 [2024-11-26 18:13:54.820642] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:07.066 [2024-11-26 18:13:54.820680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.066 [2024-11-26 18:13:54.820693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.066 [2024-11-26 18:13:54.820702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.066 [2024-11-26 18:13:54.820712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:07.066 [2024-11-26 18:13:54.823316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:07.066 [2024-11-26 18:13:54.823339] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:07.066 [2024-11-26 18:13:54.824034] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:07.066 [2024-11-26 18:13:54.824112] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:07.066 [2024-11-26 18:13:54.824126] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:07.066 [2024-11-26 18:13:54.825048] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:07.066 [2024-11-26 18:13:54.825072] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:07.066 [2024-11-26 18:13:54.825131] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:07.066 [2024-11-26 18:13:54.827088] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:07.066 are Threshold: 0% 00:18:07.066 Life Percentage Used: 0% 00:18:07.066 Data Units Read: 0 00:18:07.066 Data Units Written: 0 00:18:07.066 Host Read Commands: 0 00:18:07.066 Host Write Commands: 0 00:18:07.066 Controller Busy Time: 0 minutes 00:18:07.066 Power Cycles: 0 00:18:07.066 Power On Hours: 0 hours 00:18:07.066 Unsafe Shutdowns: 0 00:18:07.066 Unrecoverable Media Errors: 0 00:18:07.066 Lifetime Error Log Entries: 0 00:18:07.066 Warning Temperature Time: 0 minutes 00:18:07.066 Critical Temperature Time: 0 minutes 00:18:07.066 00:18:07.066 Number of Queues 00:18:07.066 ================ 00:18:07.066 Number of I/O Submission Queues: 127 00:18:07.066 Number of I/O Completion Queues: 127 00:18:07.066 00:18:07.066 Active Namespaces 00:18:07.066 ================= 00:18:07.066 Namespace ID:1 00:18:07.066 Error Recovery Timeout: Unlimited 00:18:07.066 Command Set Identifier: NVM (00h) 00:18:07.066 Deallocate: Supported 00:18:07.066 Deallocated/Unwritten Error: Not Supported 00:18:07.066 Deallocated Read Value: Unknown 00:18:07.066 Deallocate in Write Zeroes: Not Supported 00:18:07.066 Deallocated Guard Field: 0xFFFF 00:18:07.066 Flush: Supported 00:18:07.066 Reservation: Supported 00:18:07.066 Namespace Sharing Capabilities: Multiple Controllers 00:18:07.066 Size (in LBAs): 131072 (0GiB) 00:18:07.066 Capacity (in LBAs): 131072 (0GiB) 00:18:07.066 Utilization (in LBAs): 131072 (0GiB) 00:18:07.066 NGUID: 2C8B5B6D070B437C8796F3C71A9682CA 00:18:07.066 UUID: 2c8b5b6d-070b-437c-8796-f3c71a9682ca 00:18:07.066 Thin Provisioning: Not Supported 00:18:07.066 Per-NS Atomic Units: Yes 00:18:07.066 Atomic Boundary Size (Normal): 0 00:18:07.066 Atomic Boundary Size (PFail): 0 00:18:07.066 Atomic Boundary Offset: 0 00:18:07.066 Maximum Single Source Range Length: 65535 00:18:07.066 Maximum Copy Length: 65535 00:18:07.066 Maximum Source Range Count: 1 00:18:07.066 NGUID/EUI64 Never Reused: No 00:18:07.066 Namespace Write Protected: No 00:18:07.066 Number of LBA Formats: 1 00:18:07.066 Current LBA Format: LBA Format #00 00:18:07.066 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:07.066 00:18:07.066 18:13:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
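For anyone replaying this step outside the Jenkins harness, the nvmf_vfio_user.sh@84 invocation above (and its @85 write counterpart a little further down) reduces to a plain spdk_nvme_perf call against the vfio-user socket. A minimal sketch follows, using the build-tree path and socket layout captured in this log; the PERF and TRID variable names are added here only for readability, the flag glosses are the commonly documented ones (-q queue depth, -o I/O size in bytes, -w access pattern, -t run time in seconds, -c core mask), and -s and -g are left exactly as the harness passes them:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # read run, as in @84; the @85 run below differs only in -w write
  $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
  $PERF -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2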
00:18:07.326 [2024-11-26 18:13:55.077207] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:12.587 Initializing NVMe Controllers 00:18:12.587 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:12.587 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:12.588 Initialization complete. Launching workers. 00:18:12.588 ======================================================== 00:18:12.588 Latency(us) 00:18:12.588 Device Information : IOPS MiB/s Average min max 00:18:12.588 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33051.10 129.11 3871.66 1200.03 8663.63 00:18:12.588 ======================================================== 00:18:12.588 Total : 33051.10 129.11 3871.66 1200.03 8663.63 00:18:12.588 00:18:12.588 [2024-11-26 18:14:00.094474] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:12.588 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:12.588 [2024-11-26 18:14:00.358793] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:17.922 Initializing NVMe Controllers 00:18:17.922 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:17.922 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:17.922 Initialization complete. Launching workers. 
00:18:17.922 ======================================================== 00:18:17.922 Latency(us) 00:18:17.923 Device Information : IOPS MiB/s Average min max 00:18:17.923 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16045.72 62.68 7976.42 6958.48 8990.70 00:18:17.923 ======================================================== 00:18:17.923 Total : 16045.72 62.68 7976.42 6958.48 8990.70 00:18:17.923 00:18:17.923 [2024-11-26 18:14:05.394226] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:17.923 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:17.923 [2024-11-26 18:14:05.617371] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:23.185 [2024-11-26 18:14:10.683594] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:23.185 Initializing NVMe Controllers 00:18:23.185 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:23.185 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:23.185 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:23.185 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:23.185 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:23.185 Initialization complete. Launching workers. 00:18:23.185 Starting thread on core 2 00:18:23.185 Starting thread on core 3 00:18:23.185 Starting thread on core 1 00:18:23.185 18:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:23.185 [2024-11-26 18:14:10.993532] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.465 [2024-11-26 18:14:14.059716] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.465 Initializing NVMe Controllers 00:18:26.465 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:26.465 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:26.465 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:26.465 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:26.465 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:26.465 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:26.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:26.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:26.466 Initialization complete. Launching workers. 
00:18:26.466 Starting thread on core 1 with urgent priority queue 00:18:26.466 Starting thread on core 2 with urgent priority queue 00:18:26.466 Starting thread on core 3 with urgent priority queue 00:18:26.466 Starting thread on core 0 with urgent priority queue 00:18:26.466 SPDK bdev Controller (SPDK1 ) core 0: 2742.67 IO/s 36.46 secs/100000 ios 00:18:26.466 SPDK bdev Controller (SPDK1 ) core 1: 2645.00 IO/s 37.81 secs/100000 ios 00:18:26.466 SPDK bdev Controller (SPDK1 ) core 2: 2683.67 IO/s 37.26 secs/100000 ios 00:18:26.466 SPDK bdev Controller (SPDK1 ) core 3: 2119.33 IO/s 47.18 secs/100000 ios 00:18:26.466 ======================================================== 00:18:26.466 00:18:26.466 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:26.466 [2024-11-26 18:14:14.377251] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.466 Initializing NVMe Controllers 00:18:26.466 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:26.466 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:26.466 Namespace ID: 1 size: 0GB 00:18:26.466 Initialization complete. 00:18:26.466 INFO: using host memory buffer for IO 00:18:26.466 Hello world! 00:18:26.466 [2024-11-26 18:14:14.413000] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.466 18:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:27.031 [2024-11-26 18:14:14.733802] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:27.964 Initializing NVMe Controllers 00:18:27.964 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:27.964 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:27.964 Initialization complete. Launching workers. 
00:18:27.965 submit (in ns) avg, min, max = 7238.6, 3547.8, 4017642.2 00:18:27.965 complete (in ns) avg, min, max = 27951.2, 2063.3, 4017464.4 00:18:27.965 00:18:27.965 Submit histogram 00:18:27.965 ================ 00:18:27.965 Range in us Cumulative Count 00:18:27.965 3.532 - 3.556: 0.0530% ( 7) 00:18:27.965 3.556 - 3.579: 1.8619% ( 239) 00:18:27.965 3.579 - 3.603: 6.8271% ( 656) 00:18:27.965 3.603 - 3.627: 16.5002% ( 1278) 00:18:27.965 3.627 - 3.650: 28.4363% ( 1577) 00:18:27.965 3.650 - 3.674: 39.8577% ( 1509) 00:18:27.965 3.674 - 3.698: 47.0557% ( 951) 00:18:27.965 3.698 - 3.721: 53.1638% ( 807) 00:18:27.965 3.721 - 3.745: 58.1063% ( 653) 00:18:27.965 3.745 - 3.769: 63.5332% ( 717) 00:18:27.965 3.769 - 3.793: 67.7793% ( 561) 00:18:27.965 3.793 - 3.816: 71.0642% ( 434) 00:18:27.965 3.816 - 3.840: 73.9479% ( 381) 00:18:27.965 3.840 - 3.864: 77.6188% ( 485) 00:18:27.965 3.864 - 3.887: 81.7590% ( 547) 00:18:27.965 3.887 - 3.911: 85.1045% ( 442) 00:18:27.965 3.911 - 3.935: 87.3146% ( 292) 00:18:27.965 3.935 - 3.959: 88.9116% ( 211) 00:18:27.965 3.959 - 3.982: 90.5313% ( 214) 00:18:27.965 3.982 - 4.006: 92.1586% ( 215) 00:18:27.965 4.006 - 4.030: 93.1653% ( 133) 00:18:27.965 4.030 - 4.053: 94.0433% ( 116) 00:18:27.965 4.053 - 4.077: 94.8683% ( 109) 00:18:27.965 4.077 - 4.101: 95.5041% ( 84) 00:18:27.965 4.101 - 4.124: 96.0566% ( 73) 00:18:27.965 4.124 - 4.148: 96.4048% ( 46) 00:18:27.965 4.148 - 4.172: 96.6697% ( 35) 00:18:27.965 4.172 - 4.196: 96.8286% ( 21) 00:18:27.965 4.196 - 4.219: 96.9573% ( 17) 00:18:27.965 4.219 - 4.243: 96.9800% ( 3) 00:18:27.965 4.243 - 4.267: 97.0254% ( 6) 00:18:27.965 4.267 - 4.290: 97.1390% ( 15) 00:18:27.965 4.290 - 4.314: 97.2147% ( 10) 00:18:27.965 4.314 - 4.338: 97.3509% ( 18) 00:18:27.965 4.338 - 4.361: 97.4039% ( 7) 00:18:27.965 4.361 - 4.385: 97.4796% ( 10) 00:18:27.965 4.385 - 4.409: 97.5704% ( 12) 00:18:27.965 4.409 - 4.433: 97.6007% ( 4) 00:18:27.965 4.433 - 4.456: 97.6158% ( 2) 00:18:27.965 4.456 - 4.480: 97.6385% ( 3) 00:18:27.965 4.480 - 4.504: 97.6461% ( 1) 00:18:27.965 4.551 - 4.575: 97.6536% ( 1) 00:18:27.965 4.575 - 4.599: 97.6688% ( 2) 00:18:27.965 4.599 - 4.622: 97.6915% ( 3) 00:18:27.965 4.622 - 4.646: 97.7142% ( 3) 00:18:27.965 4.646 - 4.670: 97.7369% ( 3) 00:18:27.965 4.670 - 4.693: 97.7823% ( 6) 00:18:27.965 4.693 - 4.717: 97.8429% ( 8) 00:18:27.965 4.717 - 4.741: 97.8883% ( 6) 00:18:27.965 4.741 - 4.764: 97.9564% ( 9) 00:18:27.965 4.764 - 4.788: 98.0245% ( 9) 00:18:27.965 4.788 - 4.812: 98.0321% ( 1) 00:18:27.965 4.812 - 4.836: 98.1002% ( 9) 00:18:27.965 4.836 - 4.859: 98.1456% ( 6) 00:18:27.965 4.859 - 4.883: 98.1759% ( 4) 00:18:27.965 4.883 - 4.907: 98.1986% ( 3) 00:18:27.965 4.907 - 4.930: 98.2289% ( 4) 00:18:27.965 4.930 - 4.954: 98.2516% ( 3) 00:18:27.965 4.954 - 4.978: 98.3121% ( 8) 00:18:27.965 4.978 - 5.001: 98.3197% ( 1) 00:18:27.965 5.001 - 5.025: 98.3651% ( 6) 00:18:27.965 5.025 - 5.049: 98.3954% ( 4) 00:18:27.965 5.049 - 5.073: 98.4030% ( 1) 00:18:27.965 5.073 - 5.096: 98.4408% ( 5) 00:18:27.965 5.120 - 5.144: 98.4484% ( 1) 00:18:27.965 5.215 - 5.239: 98.4559% ( 1) 00:18:27.965 5.239 - 5.262: 98.4711% ( 2) 00:18:27.965 5.262 - 5.286: 98.4787% ( 1) 00:18:27.965 5.286 - 5.310: 98.4862% ( 1) 00:18:27.965 5.310 - 5.333: 98.4938% ( 1) 00:18:27.965 5.357 - 5.381: 98.5014% ( 1) 00:18:27.965 5.452 - 5.476: 98.5165% ( 2) 00:18:27.965 5.476 - 5.499: 98.5241% ( 1) 00:18:27.965 5.570 - 5.594: 98.5316% ( 1) 00:18:27.965 5.689 - 5.713: 98.5468% ( 2) 00:18:27.965 5.855 - 5.879: 98.5543% ( 1) 00:18:27.965 5.950 - 5.973: 98.5695% ( 2) 
00:18:27.965 5.973 - 5.997: 98.5771% ( 1) 00:18:27.965 6.116 - 6.163: 98.5846% ( 1) 00:18:27.965 6.163 - 6.210: 98.5922% ( 1) 00:18:27.965 6.210 - 6.258: 98.6073% ( 2) 00:18:27.965 6.305 - 6.353: 98.6149% ( 1) 00:18:27.965 6.400 - 6.447: 98.6225% ( 1) 00:18:27.965 6.827 - 6.874: 98.6300% ( 1) 00:18:27.965 6.921 - 6.969: 98.6376% ( 1) 00:18:27.965 7.111 - 7.159: 98.6452% ( 1) 00:18:27.965 7.253 - 7.301: 98.6603% ( 2) 00:18:27.965 7.301 - 7.348: 98.6679% ( 1) 00:18:27.965 7.490 - 7.538: 98.6754% ( 1) 00:18:27.965 7.538 - 7.585: 98.6906% ( 2) 00:18:27.965 7.680 - 7.727: 98.6982% ( 1) 00:18:27.965 7.727 - 7.775: 98.7057% ( 1) 00:18:27.965 7.964 - 8.012: 98.7133% ( 1) 00:18:27.965 8.107 - 8.154: 98.7360% ( 3) 00:18:27.965 8.249 - 8.296: 98.7436% ( 1) 00:18:27.965 8.296 - 8.344: 98.7511% ( 1) 00:18:27.965 8.391 - 8.439: 98.7587% ( 1) 00:18:27.965 8.439 - 8.486: 98.7738% ( 2) 00:18:27.965 8.486 - 8.533: 98.7814% ( 1) 00:18:27.965 8.533 - 8.581: 98.7890% ( 1) 00:18:27.965 8.581 - 8.628: 98.7965% ( 1) 00:18:27.965 8.628 - 8.676: 98.8041% ( 1) 00:18:27.965 8.676 - 8.723: 98.8117% ( 1) 00:18:27.965 8.723 - 8.770: 98.8268% ( 2) 00:18:27.965 8.818 - 8.865: 98.8420% ( 2) 00:18:27.965 8.960 - 9.007: 98.8495% ( 1) 00:18:27.965 9.292 - 9.339: 98.8571% ( 1) 00:18:27.965 9.339 - 9.387: 98.8722% ( 2) 00:18:27.965 9.434 - 9.481: 98.8874% ( 2) 00:18:27.965 9.624 - 9.671: 98.9025% ( 2) 00:18:27.965 9.671 - 9.719: 98.9101% ( 1) 00:18:27.965 9.719 - 9.766: 98.9177% ( 1) 00:18:27.965 9.861 - 9.908: 98.9252% ( 1) 00:18:27.965 10.003 - 10.050: 98.9328% ( 1) 00:18:27.965 10.287 - 10.335: 98.9404% ( 1) 00:18:27.965 10.335 - 10.382: 98.9479% ( 1) 00:18:27.965 10.572 - 10.619: 98.9631% ( 2) 00:18:27.965 10.619 - 10.667: 98.9782% ( 2) 00:18:27.965 10.761 - 10.809: 98.9858% ( 1) 00:18:27.965 11.425 - 11.473: 98.9933% ( 1) 00:18:27.965 11.710 - 11.757: 99.0085% ( 2) 00:18:27.965 11.947 - 11.994: 99.0236% ( 2) 00:18:27.965 12.231 - 12.326: 99.0312% ( 1) 00:18:27.965 13.274 - 13.369: 99.0388% ( 1) 00:18:27.965 13.464 - 13.559: 99.0539% ( 2) 00:18:27.965 14.222 - 14.317: 99.0615% ( 1) 00:18:27.965 14.601 - 14.696: 99.0690% ( 1) 00:18:27.965 15.076 - 15.170: 99.0766% ( 1) 00:18:27.965 15.360 - 15.455: 99.0842% ( 1) 00:18:27.965 16.877 - 16.972: 99.0917% ( 1) 00:18:27.965 17.067 - 17.161: 99.1069% ( 2) 00:18:27.965 17.161 - 17.256: 99.1296% ( 3) 00:18:27.965 17.256 - 17.351: 99.1447% ( 2) 00:18:27.965 17.446 - 17.541: 99.1977% ( 7) 00:18:27.965 17.541 - 17.636: 99.2355% ( 5) 00:18:27.965 17.636 - 17.730: 99.2885% ( 7) 00:18:27.965 17.730 - 17.825: 99.3188% ( 4) 00:18:27.965 17.825 - 17.920: 99.3642% ( 6) 00:18:27.965 17.920 - 18.015: 99.4323% ( 9) 00:18:27.965 18.015 - 18.110: 99.4475% ( 2) 00:18:27.965 18.110 - 18.204: 99.5307% ( 11) 00:18:27.965 18.204 - 18.299: 99.5686% ( 5) 00:18:27.965 18.299 - 18.394: 99.6140% ( 6) 00:18:27.965 18.394 - 18.489: 99.6972% ( 11) 00:18:27.965 18.489 - 18.584: 99.7351% ( 5) 00:18:27.965 18.584 - 18.679: 99.7956% ( 8) 00:18:27.965 18.679 - 18.773: 99.8259% ( 4) 00:18:27.965 18.773 - 18.868: 99.8562% ( 4) 00:18:27.965 18.868 - 18.963: 99.8638% ( 1) 00:18:27.965 19.058 - 19.153: 99.8940% ( 4) 00:18:27.965 19.342 - 19.437: 99.9016% ( 1) 00:18:27.965 19.627 - 19.721: 99.9092% ( 1) 00:18:27.965 20.290 - 20.385: 99.9167% ( 1) 00:18:27.965 3980.705 - 4004.978: 99.9849% ( 9) 00:18:27.965 4004.978 - 4029.250: 100.0000% ( 2) 00:18:27.965 00:18:27.965 Complete histogram 00:18:27.965 ================== 00:18:27.965 Range in us Cumulative Count 00:18:27.965 2.062 - 2.074: 4.3597% ( 576) 00:18:27.965 2.074 - 
2.086: 38.8208% ( 4553) 00:18:27.965 2.086 - 2.098: 46.1020% ( 962) 00:18:27.965 2.098 - 2.110: 50.9688% ( 643) 00:18:27.965 2.110 - 2.121: 59.4611% ( 1122) 00:18:27.965 2.121 - 2.133: 61.1414% ( 222) 00:18:27.965 2.133 - 2.145: 68.5816% ( 983) 00:18:27.965 2.145 - 2.157: 80.5177% ( 1577) 00:18:27.965 2.157 - 2.169: 82.4099% ( 250) 00:18:27.965 2.169 - 2.181: 84.9152% ( 331) 00:18:27.965 2.181 - 2.193: 87.5946% ( 354) 00:18:27.965 2.193 - 2.204: 88.3591% ( 101) 00:18:27.965 2.204 - 2.216: 89.9561% ( 211) 00:18:27.965 2.216 - 2.228: 92.1965% ( 296) 00:18:27.965 2.228 - 2.240: 93.8692% ( 221) 00:18:27.965 2.240 - 2.252: 94.4899% ( 82) 00:18:27.965 2.252 - 2.264: 94.7850% ( 39) 00:18:27.965 2.264 - 2.276: 94.9970% ( 28) 00:18:27.965 2.276 - 2.287: 95.2240% ( 30) 00:18:27.965 2.287 - 2.299: 95.5344% ( 41) 00:18:27.965 2.299 - 2.311: 95.8371% ( 40) 00:18:27.965 2.311 - 2.323: 95.9734% ( 18) 00:18:27.966 2.323 - 2.335: 96.0112% ( 5) 00:18:27.966 2.335 - 2.347: 96.0642% ( 7) 00:18:27.966 2.347 - 2.359: 96.1247% ( 8) 00:18:27.966 2.359 - 2.370: 96.1701% ( 6) 00:18:27.966 2.370 - 2.382: 96.1853% ( 2) 00:18:27.966 2.382 - 2.394: 96.2383% ( 7) 00:18:27.966 2.394 - 2.406: 96.3669% ( 17) 00:18:27.966 2.406 - 2.418: 96.5032% ( 18) 00:18:27.966 2.418 - 2.430: 96.7151% ( 28) 00:18:27.966 2.430 - 2.441: 96.9043% ( 25) 00:18:27.966 2.441 - 2.453: 97.1541% ( 33) 00:18:27.966 2.453 - 2.465: 97.3736% ( 29) 00:18:27.966 2.465 - 2.477: 97.5477% ( 23) 00:18:27.966 2.477 - 2.489: 97.7823% ( 31) 00:18:27.966 2.489 - 2.501: 97.9640% ( 24) 00:18:27.966 2.501 - 2.513: 98.0321% ( 9) 00:18:27.966 2.513 - 2.524: 98.1381% ( 14) 00:18:27.966 2.524 - 2.536: 98.2137% ( 10) 00:18:27.966 2.536 - 2.548: 98.2440% ( 4) 00:18:27.966 2.548 - 2.560: 98.2970% ( 7) 00:18:27.966 2.560 - 2.572: 98.3197% ( 3) 00:18:27.966 2.584 - 2.596: 98.3273% ( 1) 00:18:27.966 2.607 - 2.619: 98.3424% ( 2) 00:18:27.966 2.619 - 2.631: 98.3500% ( 1) 00:18:27.966 2.631 - 2.643: 98.3576% ( 1) 00:18:27.966 2.702 - 2.714: 98.3651% ( 1) 00:18:27.966 2.714 - 2.726: 98.3727% ( 1) 00:18:27.966 2.750 - 2.761: 98.3803% ( 1) 00:18:27.966 2.773 - 2.785: 98.3878% ( 1) 00:18:27.966 2.797 - 2.809: 98.3954% ( 1) 00:18:27.966 2.809 - 2.821: 98.4030% ( 1) 00:18:27.966 2.916 - 2.927: 98.4105% ( 1) 00:18:27.966 2.999 - 3.010: 98.4181% ( 1) 00:18:27.966 3.413 - 3.437: 98.4257% ( 1) 00:18:27.966 3.437 - 3.461: 98.4408% ( 2) 00:18:27.966 3.508 - 3.532: 98.4787% ( 5) 00:18:27.966 3.556 - 3.579: 98.4938% ( 2) 00:18:27.966 3.579 - 3.603: 98.5089% ( 2) 00:18:27.966 3.650 - 3.674: 98.5165% ( 1) 00:18:27.966 3.721 - 3.745: 98.5241% ( 1) 00:18:27.966 3.769 - 3.793: 98.5392% ( 2) 00:18:27.966 3.793 - 3.816: 98.5468% ( 1) 00:18:27.966 3.816 - 3.840: 98.5619% ( 2) 00:18:27.966 3.840 - 3.864: 98.5846% ( 3) 00:18:27.966 3.887 - 3.911: 98.5998% ( 2) 00:18:27.966 3.959 - 3.982: 98.6149% ( 2) 00:18:27.966 4.006 - 4.030: 98.6225% ( 1) 00:18:27.966 4.077 - 4.101: 98.6300% ( 1) 00:18:27.966 4.196 - 4.219: 98.6376% ( 1) 00:18:27.966 4.219 - 4.243: 98.6452% ( 1) 00:18:27.966 5.736 - 5.760: 98.6527% ( 1) 00:18:27.966 5.831 - 5.855: 98.6603% ( 1) 00:18:27.966 5.950 - 5.973: 98.6679% ( 1) 00:18:27.966 6.210 - 6.258: 98.6754% ( 1) 00:18:27.966 6.732 - 6.779: 98.6830% ( 1) 00:18:27.966 6.779 - 6.827: 98.6906% ( 1) 00:18:27.966 6.874 - 6.921: 98.6982% ( 1) 00:18:27.966 7.016 - 7.064: 98.7057% ( 1) 00:18:27.966 7.159 - 7.206: 98.7133% ( 1) 00:18:27.966 7.348 - 7.396: 98.7209% ( 1) 00:18:27.966 7.443 - 7.490: 98.7284% ( 1) 00:18:27.966 7.822 - 7.870: 98.7360% ( 1) 00:18:27.966 7.964 - 8.012: 98.7511% ( 
2) 00:18:27.966 8.059 - 8.107: 98.7587% ( 1) 00:18:27.966 8.486 - 8.533: 98.7663% ( 1) 00:18:27.966 8.913 - 8.960: 98.7738% ( 1) 00:18:27.966 10.193 - 10.240: 98.7890% ( 2) 00:18:27.966 10.809 - 10.856: 98.7965% ( 1) 00:18:27.966 11.330 - 11.378: 98.8041% ( 1) 00:18:27.966 13.274 - 13.369: 98.8117% ( 1) 00:18:27.966 13.464 - 13.559: 98.8193% ( 1) 00:18:27.966 15.455 - 15.550: 98.8268% ( 1) 00:18:27.966 15.550 - 15.644: 98.8344% ( 1) 00:18:27.966 15.644 - 15.739: 98.8722% ( 5) 00:18:27.966 15.739 - 15.834: 9[2024-11-26 18:14:15.756003] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:27.966 8.9101% ( 5) 00:18:27.966 15.834 - 15.929: 98.9404% ( 4) 00:18:27.966 15.929 - 16.024: 98.9858% ( 6) 00:18:27.966 16.024 - 16.119: 98.9933% ( 1) 00:18:27.966 16.119 - 16.213: 99.0312% ( 5) 00:18:27.966 16.213 - 16.308: 99.0690% ( 5) 00:18:27.966 16.308 - 16.403: 99.1144% ( 6) 00:18:27.966 16.403 - 16.498: 99.1371% ( 3) 00:18:27.966 16.593 - 16.687: 99.1674% ( 4) 00:18:27.966 16.687 - 16.782: 99.1977% ( 4) 00:18:27.966 16.782 - 16.877: 99.2204% ( 3) 00:18:27.966 16.877 - 16.972: 99.2734% ( 7) 00:18:27.966 16.972 - 17.067: 99.2885% ( 2) 00:18:27.966 17.067 - 17.161: 99.3037% ( 2) 00:18:27.966 17.161 - 17.256: 99.3188% ( 2) 00:18:27.966 17.256 - 17.351: 99.3264% ( 1) 00:18:27.966 17.351 - 17.446: 99.3339% ( 1) 00:18:27.966 17.446 - 17.541: 99.3415% ( 1) 00:18:27.966 17.920 - 18.015: 99.3491% ( 1) 00:18:27.966 20.764 - 20.859: 99.3566% ( 1) 00:18:27.966 3980.705 - 4004.978: 99.9243% ( 75) 00:18:27.966 4004.978 - 4029.250: 100.0000% ( 10) 00:18:27.966 00:18:27.966 18:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:27.966 18:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:27.966 18:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:27.966 18:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:27.966 18:14:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:28.224 [ 00:18:28.224 { 00:18:28.224 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:28.224 "subtype": "Discovery", 00:18:28.224 "listen_addresses": [], 00:18:28.224 "allow_any_host": true, 00:18:28.224 "hosts": [] 00:18:28.224 }, 00:18:28.224 { 00:18:28.224 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:28.224 "subtype": "NVMe", 00:18:28.224 "listen_addresses": [ 00:18:28.224 { 00:18:28.224 "trtype": "VFIOUSER", 00:18:28.224 "adrfam": "IPv4", 00:18:28.224 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:28.224 "trsvcid": "0" 00:18:28.224 } 00:18:28.224 ], 00:18:28.224 "allow_any_host": true, 00:18:28.224 "hosts": [], 00:18:28.224 "serial_number": "SPDK1", 00:18:28.224 "model_number": "SPDK bdev Controller", 00:18:28.224 "max_namespaces": 32, 00:18:28.224 "min_cntlid": 1, 00:18:28.224 "max_cntlid": 65519, 00:18:28.224 "namespaces": [ 00:18:28.224 { 00:18:28.224 "nsid": 1, 00:18:28.224 "bdev_name": "Malloc1", 00:18:28.224 "name": "Malloc1", 00:18:28.224 "nguid": "2C8B5B6D070B437C8796F3C71A9682CA", 00:18:28.224 "uuid": "2c8b5b6d-070b-437c-8796-f3c71a9682ca" 00:18:28.224 } 00:18:28.224 ] 00:18:28.224 }, 00:18:28.224 { 
00:18:28.224 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:28.224 "subtype": "NVMe", 00:18:28.224 "listen_addresses": [ 00:18:28.224 { 00:18:28.224 "trtype": "VFIOUSER", 00:18:28.224 "adrfam": "IPv4", 00:18:28.224 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:28.224 "trsvcid": "0" 00:18:28.224 } 00:18:28.224 ], 00:18:28.224 "allow_any_host": true, 00:18:28.224 "hosts": [], 00:18:28.224 "serial_number": "SPDK2", 00:18:28.224 "model_number": "SPDK bdev Controller", 00:18:28.224 "max_namespaces": 32, 00:18:28.224 "min_cntlid": 1, 00:18:28.224 "max_cntlid": 65519, 00:18:28.224 "namespaces": [ 00:18:28.224 { 00:18:28.224 "nsid": 1, 00:18:28.224 "bdev_name": "Malloc2", 00:18:28.224 "name": "Malloc2", 00:18:28.224 "nguid": "D58BC14F2F0D4D9F82B6C6BC81E6C1F0", 00:18:28.224 "uuid": "d58bc14f-2f0d-4d9f-82b6-c6bc81e6c1f0" 00:18:28.224 } 00:18:28.224 ] 00:18:28.224 } 00:18:28.224 ] 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=587031 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:28.224 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:28.483 [2024-11-26 18:14:16.254780] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:28.483 Malloc3 00:18:28.483 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:28.740 [2024-11-26 18:14:16.679882] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:28.740 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:28.740 Asynchronous Event Request test 00:18:28.740 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:28.740 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:28.740 Registering asynchronous event callbacks... 00:18:28.740 Starting namespace attribute notice tests for all controllers... 
00:18:28.740 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:28.740 aer_cb - Changed Namespace 00:18:28.740 Cleaning up... 00:18:28.998 [ 00:18:28.998 { 00:18:28.998 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:28.998 "subtype": "Discovery", 00:18:28.998 "listen_addresses": [], 00:18:28.998 "allow_any_host": true, 00:18:28.998 "hosts": [] 00:18:28.998 }, 00:18:28.998 { 00:18:28.998 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:28.998 "subtype": "NVMe", 00:18:28.998 "listen_addresses": [ 00:18:28.998 { 00:18:28.998 "trtype": "VFIOUSER", 00:18:28.998 "adrfam": "IPv4", 00:18:28.998 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:28.998 "trsvcid": "0" 00:18:28.998 } 00:18:28.998 ], 00:18:28.998 "allow_any_host": true, 00:18:28.998 "hosts": [], 00:18:28.998 "serial_number": "SPDK1", 00:18:28.998 "model_number": "SPDK bdev Controller", 00:18:28.998 "max_namespaces": 32, 00:18:28.998 "min_cntlid": 1, 00:18:28.998 "max_cntlid": 65519, 00:18:28.998 "namespaces": [ 00:18:28.998 { 00:18:28.998 "nsid": 1, 00:18:28.998 "bdev_name": "Malloc1", 00:18:28.998 "name": "Malloc1", 00:18:28.998 "nguid": "2C8B5B6D070B437C8796F3C71A9682CA", 00:18:28.998 "uuid": "2c8b5b6d-070b-437c-8796-f3c71a9682ca" 00:18:28.998 }, 00:18:28.998 { 00:18:28.998 "nsid": 2, 00:18:28.998 "bdev_name": "Malloc3", 00:18:28.998 "name": "Malloc3", 00:18:28.998 "nguid": "B610ADF868EC4D86AFFB0BF96394865B", 00:18:28.998 "uuid": "b610adf8-68ec-4d86-affb-0bf96394865b" 00:18:28.998 } 00:18:28.998 ] 00:18:28.998 }, 00:18:28.998 { 00:18:28.998 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:28.998 "subtype": "NVMe", 00:18:28.998 "listen_addresses": [ 00:18:28.998 { 00:18:28.998 "trtype": "VFIOUSER", 00:18:28.998 "adrfam": "IPv4", 00:18:28.998 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:28.998 "trsvcid": "0" 00:18:28.998 } 00:18:28.998 ], 00:18:28.998 "allow_any_host": true, 00:18:28.998 "hosts": [], 00:18:28.998 "serial_number": "SPDK2", 00:18:28.998 "model_number": "SPDK bdev Controller", 00:18:28.998 "max_namespaces": 32, 00:18:28.998 "min_cntlid": 1, 00:18:28.998 "max_cntlid": 65519, 00:18:28.998 "namespaces": [ 00:18:28.998 { 00:18:28.998 "nsid": 1, 00:18:28.998 "bdev_name": "Malloc2", 00:18:28.998 "name": "Malloc2", 00:18:28.998 "nguid": "D58BC14F2F0D4D9F82B6C6BC81E6C1F0", 00:18:28.998 "uuid": "d58bc14f-2f0d-4d9f-82b6-c6bc81e6c1f0" 00:18:28.998 } 00:18:28.998 ] 00:18:28.998 } 00:18:28.998 ] 00:18:28.998 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 587031 00:18:28.998 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:28.998 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:28.998 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:28.998 18:14:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:28.998 [2024-11-26 18:14:16.984765] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:18:28.998 [2024-11-26 18:14:16.984804] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid587169 ] 00:18:29.258 [2024-11-26 18:14:17.032099] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:29.258 [2024-11-26 18:14:17.044641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:29.258 [2024-11-26 18:14:17.044675] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4eaee1c000 00:18:29.258 [2024-11-26 18:14:17.045643] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.258 [2024-11-26 18:14:17.046649] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.258 [2024-11-26 18:14:17.047656] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.258 [2024-11-26 18:14:17.048661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:29.258 [2024-11-26 18:14:17.049670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:29.258 [2024-11-26 18:14:17.050676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.258 [2024-11-26 18:14:17.051686] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:29.258 [2024-11-26 18:14:17.052699] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:29.258 [2024-11-26 18:14:17.053709] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:29.258 [2024-11-26 18:14:17.053731] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4eaee11000 00:18:29.258 [2024-11-26 18:14:17.054843] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:29.258 [2024-11-26 18:14:17.069567] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:29.258 [2024-11-26 18:14:17.069618] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:29.258 [2024-11-26 18:14:17.071691] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:29.258 [2024-11-26 18:14:17.071743] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:29.258 [2024-11-26 18:14:17.071832] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:29.258 
[2024-11-26 18:14:17.071854] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:29.258 [2024-11-26 18:14:17.071865] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:29.258 [2024-11-26 18:14:17.072698] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:29.258 [2024-11-26 18:14:17.072724] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:29.258 [2024-11-26 18:14:17.072738] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:29.258 [2024-11-26 18:14:17.073701] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:29.258 [2024-11-26 18:14:17.073722] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:29.258 [2024-11-26 18:14:17.073735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:29.258 [2024-11-26 18:14:17.074710] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:29.258 [2024-11-26 18:14:17.074731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:29.258 [2024-11-26 18:14:17.075712] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:29.258 [2024-11-26 18:14:17.075732] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:29.258 [2024-11-26 18:14:17.075741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:29.258 [2024-11-26 18:14:17.075753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:29.258 [2024-11-26 18:14:17.075862] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:29.258 [2024-11-26 18:14:17.075870] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:29.258 [2024-11-26 18:14:17.075878] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:29.258 [2024-11-26 18:14:17.076720] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:29.258 [2024-11-26 18:14:17.077726] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:29.258 [2024-11-26 18:14:17.078731] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:29.258 [2024-11-26 18:14:17.079733] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:29.258 [2024-11-26 18:14:17.079800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:29.258 [2024-11-26 18:14:17.080749] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:29.258 [2024-11-26 18:14:17.080769] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:29.258 [2024-11-26 18:14:17.080778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.080801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:29.259 [2024-11-26 18:14:17.080815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.080836] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:29.259 [2024-11-26 18:14:17.080846] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:29.259 [2024-11-26 18:14:17.080852] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.259 [2024-11-26 18:14:17.080868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.087316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.087338] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:29.259 [2024-11-26 18:14:17.087347] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:29.259 [2024-11-26 18:14:17.087355] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:29.259 [2024-11-26 18:14:17.087367] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:29.259 [2024-11-26 18:14:17.087375] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:29.259 [2024-11-26 18:14:17.087383] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:29.259 [2024-11-26 18:14:17.087391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.087404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:29.259 [2024-11-26 
18:14:17.087420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.095316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.095340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.259 [2024-11-26 18:14:17.095353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.259 [2024-11-26 18:14:17.095365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.259 [2024-11-26 18:14:17.095377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:29.259 [2024-11-26 18:14:17.095386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.095403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.095418] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.103315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.103333] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:29.259 [2024-11-26 18:14:17.103342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.103357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.103368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.103382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.111328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.111408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.111426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.111440] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:29.259 [2024-11-26 18:14:17.111452] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:18:29.259 [2024-11-26 18:14:17.111459] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.259 [2024-11-26 18:14:17.111468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.119317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.119345] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:29.259 [2024-11-26 18:14:17.119361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.119375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.119388] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:29.259 [2024-11-26 18:14:17.119396] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:29.259 [2024-11-26 18:14:17.119402] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.259 [2024-11-26 18:14:17.119412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.127312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.127336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.127351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.127365] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:29.259 [2024-11-26 18:14:17.127373] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:29.259 [2024-11-26 18:14:17.127379] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.259 [2024-11-26 18:14:17.127389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.135312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.135339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.135353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.135366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.135376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.135385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.135393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.135401] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:29.259 [2024-11-26 18:14:17.135412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:29.259 [2024-11-26 18:14:17.135421] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:29.259 [2024-11-26 18:14:17.135445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.143315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.143341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.151312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.151338] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.159314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.159339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.167315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:29.259 [2024-11-26 18:14:17.167346] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:29.259 [2024-11-26 18:14:17.167358] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:29.259 [2024-11-26 18:14:17.167364] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:29.259 [2024-11-26 18:14:17.167370] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:29.259 [2024-11-26 18:14:17.167376] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:29.259 [2024-11-26 18:14:17.167385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:29.259 [2024-11-26 18:14:17.167397] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:29.259 
[2024-11-26 18:14:17.167405] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:29.259 [2024-11-26 18:14:17.167411] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.259 [2024-11-26 18:14:17.167420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:29.259 [2024-11-26 18:14:17.167431] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:29.259 [2024-11-26 18:14:17.167439] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:29.260 [2024-11-26 18:14:17.167445] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.260 [2024-11-26 18:14:17.167454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:29.260 [2024-11-26 18:14:17.167465] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:29.260 [2024-11-26 18:14:17.167473] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:29.260 [2024-11-26 18:14:17.167479] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:29.260 [2024-11-26 18:14:17.167488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:29.260 [2024-11-26 18:14:17.175317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:29.260 [2024-11-26 18:14:17.175345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:29.260 [2024-11-26 18:14:17.175363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:29.260 [2024-11-26 18:14:17.175376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:29.260 ===================================================== 00:18:29.260 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:29.260 ===================================================== 00:18:29.260 Controller Capabilities/Features 00:18:29.260 ================================ 00:18:29.260 Vendor ID: 4e58 00:18:29.260 Subsystem Vendor ID: 4e58 00:18:29.260 Serial Number: SPDK2 00:18:29.260 Model Number: SPDK bdev Controller 00:18:29.260 Firmware Version: 25.01 00:18:29.260 Recommended Arb Burst: 6 00:18:29.260 IEEE OUI Identifier: 8d 6b 50 00:18:29.260 Multi-path I/O 00:18:29.260 May have multiple subsystem ports: Yes 00:18:29.260 May have multiple controllers: Yes 00:18:29.260 Associated with SR-IOV VF: No 00:18:29.260 Max Data Transfer Size: 131072 00:18:29.260 Max Number of Namespaces: 32 00:18:29.260 Max Number of I/O Queues: 127 00:18:29.260 NVMe Specification Version (VS): 1.3 00:18:29.260 NVMe Specification Version (Identify): 1.3 00:18:29.260 Maximum Queue Entries: 256 00:18:29.260 Contiguous Queues Required: Yes 00:18:29.260 Arbitration Mechanisms Supported 00:18:29.260 Weighted Round Robin: Not Supported 00:18:29.260 Vendor Specific: Not 
Supported 00:18:29.260 Reset Timeout: 15000 ms 00:18:29.260 Doorbell Stride: 4 bytes 00:18:29.260 NVM Subsystem Reset: Not Supported 00:18:29.260 Command Sets Supported 00:18:29.260 NVM Command Set: Supported 00:18:29.260 Boot Partition: Not Supported 00:18:29.260 Memory Page Size Minimum: 4096 bytes 00:18:29.260 Memory Page Size Maximum: 4096 bytes 00:18:29.260 Persistent Memory Region: Not Supported 00:18:29.260 Optional Asynchronous Events Supported 00:18:29.260 Namespace Attribute Notices: Supported 00:18:29.260 Firmware Activation Notices: Not Supported 00:18:29.260 ANA Change Notices: Not Supported 00:18:29.260 PLE Aggregate Log Change Notices: Not Supported 00:18:29.260 LBA Status Info Alert Notices: Not Supported 00:18:29.260 EGE Aggregate Log Change Notices: Not Supported 00:18:29.260 Normal NVM Subsystem Shutdown event: Not Supported 00:18:29.260 Zone Descriptor Change Notices: Not Supported 00:18:29.260 Discovery Log Change Notices: Not Supported 00:18:29.260 Controller Attributes 00:18:29.260 128-bit Host Identifier: Supported 00:18:29.260 Non-Operational Permissive Mode: Not Supported 00:18:29.260 NVM Sets: Not Supported 00:18:29.260 Read Recovery Levels: Not Supported 00:18:29.260 Endurance Groups: Not Supported 00:18:29.260 Predictable Latency Mode: Not Supported 00:18:29.260 Traffic Based Keep ALive: Not Supported 00:18:29.260 Namespace Granularity: Not Supported 00:18:29.260 SQ Associations: Not Supported 00:18:29.260 UUID List: Not Supported 00:18:29.260 Multi-Domain Subsystem: Not Supported 00:18:29.260 Fixed Capacity Management: Not Supported 00:18:29.260 Variable Capacity Management: Not Supported 00:18:29.260 Delete Endurance Group: Not Supported 00:18:29.260 Delete NVM Set: Not Supported 00:18:29.260 Extended LBA Formats Supported: Not Supported 00:18:29.260 Flexible Data Placement Supported: Not Supported 00:18:29.260 00:18:29.260 Controller Memory Buffer Support 00:18:29.260 ================================ 00:18:29.260 Supported: No 00:18:29.260 00:18:29.260 Persistent Memory Region Support 00:18:29.260 ================================ 00:18:29.260 Supported: No 00:18:29.260 00:18:29.260 Admin Command Set Attributes 00:18:29.260 ============================ 00:18:29.260 Security Send/Receive: Not Supported 00:18:29.260 Format NVM: Not Supported 00:18:29.260 Firmware Activate/Download: Not Supported 00:18:29.260 Namespace Management: Not Supported 00:18:29.260 Device Self-Test: Not Supported 00:18:29.260 Directives: Not Supported 00:18:29.260 NVMe-MI: Not Supported 00:18:29.260 Virtualization Management: Not Supported 00:18:29.260 Doorbell Buffer Config: Not Supported 00:18:29.260 Get LBA Status Capability: Not Supported 00:18:29.260 Command & Feature Lockdown Capability: Not Supported 00:18:29.260 Abort Command Limit: 4 00:18:29.260 Async Event Request Limit: 4 00:18:29.260 Number of Firmware Slots: N/A 00:18:29.260 Firmware Slot 1 Read-Only: N/A 00:18:29.260 Firmware Activation Without Reset: N/A 00:18:29.260 Multiple Update Detection Support: N/A 00:18:29.260 Firmware Update Granularity: No Information Provided 00:18:29.260 Per-Namespace SMART Log: No 00:18:29.260 Asymmetric Namespace Access Log Page: Not Supported 00:18:29.260 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:29.260 Command Effects Log Page: Supported 00:18:29.260 Get Log Page Extended Data: Supported 00:18:29.260 Telemetry Log Pages: Not Supported 00:18:29.260 Persistent Event Log Pages: Not Supported 00:18:29.260 Supported Log Pages Log Page: May Support 00:18:29.260 Commands Supported & 
Effects Log Page: Not Supported 00:18:29.260 Feature Identifiers & Effects Log Page:May Support 00:18:29.260 NVMe-MI Commands & Effects Log Page: May Support 00:18:29.260 Data Area 4 for Telemetry Log: Not Supported 00:18:29.260 Error Log Page Entries Supported: 128 00:18:29.260 Keep Alive: Supported 00:18:29.260 Keep Alive Granularity: 10000 ms 00:18:29.260 00:18:29.260 NVM Command Set Attributes 00:18:29.260 ========================== 00:18:29.260 Submission Queue Entry Size 00:18:29.260 Max: 64 00:18:29.260 Min: 64 00:18:29.260 Completion Queue Entry Size 00:18:29.260 Max: 16 00:18:29.260 Min: 16 00:18:29.260 Number of Namespaces: 32 00:18:29.260 Compare Command: Supported 00:18:29.260 Write Uncorrectable Command: Not Supported 00:18:29.260 Dataset Management Command: Supported 00:18:29.260 Write Zeroes Command: Supported 00:18:29.260 Set Features Save Field: Not Supported 00:18:29.260 Reservations: Not Supported 00:18:29.260 Timestamp: Not Supported 00:18:29.260 Copy: Supported 00:18:29.260 Volatile Write Cache: Present 00:18:29.260 Atomic Write Unit (Normal): 1 00:18:29.260 Atomic Write Unit (PFail): 1 00:18:29.260 Atomic Compare & Write Unit: 1 00:18:29.260 Fused Compare & Write: Supported 00:18:29.260 Scatter-Gather List 00:18:29.260 SGL Command Set: Supported (Dword aligned) 00:18:29.260 SGL Keyed: Not Supported 00:18:29.260 SGL Bit Bucket Descriptor: Not Supported 00:18:29.260 SGL Metadata Pointer: Not Supported 00:18:29.260 Oversized SGL: Not Supported 00:18:29.260 SGL Metadata Address: Not Supported 00:18:29.260 SGL Offset: Not Supported 00:18:29.260 Transport SGL Data Block: Not Supported 00:18:29.260 Replay Protected Memory Block: Not Supported 00:18:29.260 00:18:29.260 Firmware Slot Information 00:18:29.260 ========================= 00:18:29.260 Active slot: 1 00:18:29.260 Slot 1 Firmware Revision: 25.01 00:18:29.260 00:18:29.260 00:18:29.260 Commands Supported and Effects 00:18:29.260 ============================== 00:18:29.260 Admin Commands 00:18:29.260 -------------- 00:18:29.260 Get Log Page (02h): Supported 00:18:29.260 Identify (06h): Supported 00:18:29.260 Abort (08h): Supported 00:18:29.260 Set Features (09h): Supported 00:18:29.260 Get Features (0Ah): Supported 00:18:29.260 Asynchronous Event Request (0Ch): Supported 00:18:29.260 Keep Alive (18h): Supported 00:18:29.260 I/O Commands 00:18:29.260 ------------ 00:18:29.260 Flush (00h): Supported LBA-Change 00:18:29.260 Write (01h): Supported LBA-Change 00:18:29.260 Read (02h): Supported 00:18:29.260 Compare (05h): Supported 00:18:29.260 Write Zeroes (08h): Supported LBA-Change 00:18:29.260 Dataset Management (09h): Supported LBA-Change 00:18:29.260 Copy (19h): Supported LBA-Change 00:18:29.260 00:18:29.260 Error Log 00:18:29.260 ========= 00:18:29.260 00:18:29.260 Arbitration 00:18:29.260 =========== 00:18:29.260 Arbitration Burst: 1 00:18:29.260 00:18:29.260 Power Management 00:18:29.260 ================ 00:18:29.260 Number of Power States: 1 00:18:29.260 Current Power State: Power State #0 00:18:29.260 Power State #0: 00:18:29.261 Max Power: 0.00 W 00:18:29.261 Non-Operational State: Operational 00:18:29.261 Entry Latency: Not Reported 00:18:29.261 Exit Latency: Not Reported 00:18:29.261 Relative Read Throughput: 0 00:18:29.261 Relative Read Latency: 0 00:18:29.261 Relative Write Throughput: 0 00:18:29.261 Relative Write Latency: 0 00:18:29.261 Idle Power: Not Reported 00:18:29.261 Active Power: Not Reported 00:18:29.261 Non-Operational Permissive Mode: Not Supported 00:18:29.261 00:18:29.261 Health Information 
00:18:29.261 ================== 00:18:29.261 Critical Warnings: 00:18:29.261 Available Spare Space: OK 00:18:29.261 Temperature: OK 00:18:29.261 Device Reliability: OK 00:18:29.261 Read Only: No 00:18:29.261 Volatile Memory Backup: OK 00:18:29.261 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:29.261 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:29.261 Available Spare: 0% 00:18:29.261 Available Sp[2024-11-26 18:14:17.175492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:29.261 [2024-11-26 18:14:17.183333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:29.261 [2024-11-26 18:14:17.183381] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:29.261 [2024-11-26 18:14:17.183399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.261 [2024-11-26 18:14:17.183410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.261 [2024-11-26 18:14:17.183420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.261 [2024-11-26 18:14:17.183429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:29.261 [2024-11-26 18:14:17.183509] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:29.261 [2024-11-26 18:14:17.183531] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:29.261 [2024-11-26 18:14:17.184515] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:29.261 [2024-11-26 18:14:17.184587] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:29.261 [2024-11-26 18:14:17.184615] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:29.261 [2024-11-26 18:14:17.187313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:29.261 [2024-11-26 18:14:17.187338] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 2 milliseconds 00:18:29.261 [2024-11-26 18:14:17.187389] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:29.261 [2024-11-26 18:14:17.188579] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:29.261 are Threshold: 0% 00:18:29.261 Life Percentage Used: 0% 00:18:29.261 Data Units Read: 0 00:18:29.261 Data Units Written: 0 00:18:29.261 Host Read Commands: 0 00:18:29.261 Host Write Commands: 0 00:18:29.261 Controller Busy Time: 0 minutes 00:18:29.261 Power Cycles: 0 00:18:29.261 Power On Hours: 0 hours 00:18:29.261 Unsafe Shutdowns: 0 00:18:29.261 Unrecoverable Media Errors: 0 00:18:29.261 Lifetime Error Log Entries: 0 00:18:29.261 Warning Temperature 
Time: 0 minutes 00:18:29.261 Critical Temperature Time: 0 minutes 00:18:29.261 00:18:29.261 Number of Queues 00:18:29.261 ================ 00:18:29.261 Number of I/O Submission Queues: 127 00:18:29.261 Number of I/O Completion Queues: 127 00:18:29.261 00:18:29.261 Active Namespaces 00:18:29.261 ================= 00:18:29.261 Namespace ID:1 00:18:29.261 Error Recovery Timeout: Unlimited 00:18:29.261 Command Set Identifier: NVM (00h) 00:18:29.261 Deallocate: Supported 00:18:29.261 Deallocated/Unwritten Error: Not Supported 00:18:29.261 Deallocated Read Value: Unknown 00:18:29.261 Deallocate in Write Zeroes: Not Supported 00:18:29.261 Deallocated Guard Field: 0xFFFF 00:18:29.261 Flush: Supported 00:18:29.261 Reservation: Supported 00:18:29.261 Namespace Sharing Capabilities: Multiple Controllers 00:18:29.261 Size (in LBAs): 131072 (0GiB) 00:18:29.261 Capacity (in LBAs): 131072 (0GiB) 00:18:29.261 Utilization (in LBAs): 131072 (0GiB) 00:18:29.261 NGUID: D58BC14F2F0D4D9F82B6C6BC81E6C1F0 00:18:29.261 UUID: d58bc14f-2f0d-4d9f-82b6-c6bc81e6c1f0 00:18:29.261 Thin Provisioning: Not Supported 00:18:29.261 Per-NS Atomic Units: Yes 00:18:29.261 Atomic Boundary Size (Normal): 0 00:18:29.261 Atomic Boundary Size (PFail): 0 00:18:29.261 Atomic Boundary Offset: 0 00:18:29.261 Maximum Single Source Range Length: 65535 00:18:29.261 Maximum Copy Length: 65535 00:18:29.261 Maximum Source Range Count: 1 00:18:29.261 NGUID/EUI64 Never Reused: No 00:18:29.261 Namespace Write Protected: No 00:18:29.261 Number of LBA Formats: 1 00:18:29.261 Current LBA Format: LBA Format #00 00:18:29.261 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:29.261 00:18:29.261 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:29.519 [2024-11-26 18:14:17.442119] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:34.790 Initializing NVMe Controllers 00:18:34.790 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:34.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:34.790 Initialization complete. Launching workers. 
00:18:34.790 ======================================================== 00:18:34.790 Latency(us) 00:18:34.790 Device Information : IOPS MiB/s Average min max 00:18:34.790 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33006.83 128.93 3877.41 1186.04 11575.88 00:18:34.790 ======================================================== 00:18:34.790 Total : 33006.83 128.93 3877.41 1186.04 11575.88 00:18:34.790 00:18:34.790 [2024-11-26 18:14:22.544677] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:34.790 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:35.047 [2024-11-26 18:14:22.811499] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:40.301 Initializing NVMe Controllers 00:18:40.301 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:40.301 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:40.301 Initialization complete. Launching workers. 00:18:40.301 ======================================================== 00:18:40.301 Latency(us) 00:18:40.301 Device Information : IOPS MiB/s Average min max 00:18:40.301 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29540.94 115.39 4332.31 1255.94 7543.49 00:18:40.301 ======================================================== 00:18:40.301 Total : 29540.94 115.39 4332.31 1255.94 7543.49 00:18:40.301 00:18:40.301 [2024-11-26 18:14:27.831634] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:40.301 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:40.301 [2024-11-26 18:14:28.065567] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:45.562 [2024-11-26 18:14:33.202443] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:45.562 Initializing NVMe Controllers 00:18:45.562 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:45.562 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:45.562 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:45.562 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:45.562 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:45.562 Initialization complete. Launching workers. 
00:18:45.562 Starting thread on core 2 00:18:45.562 Starting thread on core 3 00:18:45.562 Starting thread on core 1 00:18:45.562 18:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:45.562 [2024-11-26 18:14:33.523423] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:48.908 [2024-11-26 18:14:36.592047] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:48.908 Initializing NVMe Controllers 00:18:48.908 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:48.908 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:48.908 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:48.908 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:48.908 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:48.908 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:48.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:48.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:48.908 Initialization complete. Launching workers. 00:18:48.908 Starting thread on core 1 with urgent priority queue 00:18:48.908 Starting thread on core 2 with urgent priority queue 00:18:48.908 Starting thread on core 3 with urgent priority queue 00:18:48.908 Starting thread on core 0 with urgent priority queue 00:18:48.908 SPDK bdev Controller (SPDK2 ) core 0: 4992.67 IO/s 20.03 secs/100000 ios 00:18:48.908 SPDK bdev Controller (SPDK2 ) core 1: 5450.33 IO/s 18.35 secs/100000 ios 00:18:48.908 SPDK bdev Controller (SPDK2 ) core 2: 5202.33 IO/s 19.22 secs/100000 ios 00:18:48.908 SPDK bdev Controller (SPDK2 ) core 3: 5675.33 IO/s 17.62 secs/100000 ios 00:18:48.908 ======================================================== 00:18:48.908 00:18:48.908 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:48.908 [2024-11-26 18:14:36.916814] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:49.166 Initializing NVMe Controllers 00:18:49.166 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:49.166 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:49.166 Namespace ID: 1 size: 0GB 00:18:49.166 Initialization complete. 00:18:49.166 INFO: using host memory buffer for IO 00:18:49.166 Hello world! 
00:18:49.166 [2024-11-26 18:14:36.930006] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:49.166 18:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:49.424 [2024-11-26 18:14:37.251706] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:50.358 Initializing NVMe Controllers 00:18:50.358 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:50.358 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:50.358 Initialization complete. Launching workers. 00:18:50.358 submit (in ns) avg, min, max = 8890.3, 3533.3, 4018197.8 00:18:50.358 complete (in ns) avg, min, max = 25903.1, 2046.7, 4018305.6 00:18:50.358 00:18:50.358 Submit histogram 00:18:50.358 ================ 00:18:50.358 Range in us Cumulative Count 00:18:50.358 3.532 - 3.556: 1.2679% ( 165) 00:18:50.358 3.556 - 3.579: 6.2241% ( 645) 00:18:50.358 3.579 - 3.603: 15.3757% ( 1191) 00:18:50.358 3.603 - 3.627: 25.1422% ( 1271) 00:18:50.358 3.627 - 3.650: 35.6309% ( 1365) 00:18:50.358 3.650 - 3.674: 42.8462% ( 939) 00:18:50.358 3.674 - 3.698: 49.4237% ( 856) 00:18:50.358 3.698 - 3.721: 54.5259% ( 664) 00:18:50.358 3.721 - 3.745: 58.4217% ( 507) 00:18:50.358 3.745 - 3.769: 61.3339% ( 379) 00:18:50.358 3.769 - 3.793: 64.3845% ( 397) 00:18:50.358 3.793 - 3.816: 67.9883% ( 469) 00:18:50.358 3.816 - 3.840: 72.8139% ( 628) 00:18:50.358 3.840 - 3.864: 77.4089% ( 598) 00:18:50.358 3.864 - 3.887: 81.3201% ( 509) 00:18:50.358 3.887 - 3.911: 84.5781% ( 424) 00:18:50.358 3.911 - 3.935: 86.8603% ( 297) 00:18:50.358 3.935 - 3.959: 88.3817% ( 198) 00:18:50.358 3.959 - 3.982: 89.6727% ( 168) 00:18:50.358 3.982 - 4.006: 90.7407% ( 139) 00:18:50.358 4.006 - 4.030: 91.6936% ( 124) 00:18:50.358 4.030 - 4.053: 92.5926% ( 117) 00:18:50.358 4.053 - 4.077: 93.4916% ( 117) 00:18:50.358 4.077 - 4.101: 94.3446% ( 111) 00:18:50.358 4.101 - 4.124: 94.9285% ( 76) 00:18:50.358 4.124 - 4.148: 95.2820% ( 46) 00:18:50.358 4.148 - 4.172: 95.6432% ( 47) 00:18:50.358 4.172 - 4.196: 95.8737% ( 30) 00:18:50.358 4.196 - 4.219: 96.0197% ( 19) 00:18:50.358 4.219 - 4.243: 96.2195% ( 26) 00:18:50.358 4.243 - 4.267: 96.2886% ( 9) 00:18:50.358 4.267 - 4.290: 96.3655% ( 10) 00:18:50.358 4.290 - 4.314: 96.4653% ( 13) 00:18:50.358 4.314 - 4.338: 96.5268% ( 8) 00:18:50.358 4.338 - 4.361: 96.6113% ( 11) 00:18:50.358 4.361 - 4.385: 96.6498% ( 5) 00:18:50.358 4.385 - 4.409: 96.6882% ( 5) 00:18:50.358 4.409 - 4.433: 96.7343% ( 6) 00:18:50.358 4.433 - 4.456: 96.7727% ( 5) 00:18:50.358 4.456 - 4.480: 96.7881% ( 2) 00:18:50.358 4.504 - 4.527: 96.7958% ( 1) 00:18:50.358 4.551 - 4.575: 96.8188% ( 3) 00:18:50.358 4.575 - 4.599: 96.8419% ( 3) 00:18:50.358 4.622 - 4.646: 96.8495% ( 1) 00:18:50.358 4.670 - 4.693: 96.8649% ( 2) 00:18:50.358 4.693 - 4.717: 96.8957% ( 4) 00:18:50.358 4.717 - 4.741: 96.9187% ( 3) 00:18:50.358 4.741 - 4.764: 96.9418% ( 3) 00:18:50.358 4.764 - 4.788: 97.0032% ( 8) 00:18:50.358 4.788 - 4.812: 97.0340% ( 4) 00:18:50.358 4.812 - 4.836: 97.0724% ( 5) 00:18:50.358 4.836 - 4.859: 97.1262% ( 7) 00:18:50.358 4.859 - 4.883: 97.1646% ( 5) 00:18:50.358 4.883 - 4.907: 97.2107% ( 6) 00:18:50.358 4.907 - 4.930: 97.2491% ( 5) 00:18:50.358 4.930 - 4.954: 97.2722% ( 3) 00:18:50.358 4.954 - 4.978: 97.3260% ( 7) 00:18:50.358 4.978 - 
5.001: 97.3644% ( 5) 00:18:50.358 5.001 - 5.025: 97.4105% ( 6) 00:18:50.358 5.025 - 5.049: 97.4720% ( 8) 00:18:50.358 5.049 - 5.073: 97.5257% ( 7) 00:18:50.358 5.096 - 5.120: 97.5642% ( 5) 00:18:50.358 5.120 - 5.144: 97.5718% ( 1) 00:18:50.358 5.144 - 5.167: 97.5872% ( 2) 00:18:50.358 5.167 - 5.191: 97.6103% ( 3) 00:18:50.358 5.191 - 5.215: 97.6179% ( 1) 00:18:50.358 5.239 - 5.262: 97.6256% ( 1) 00:18:50.358 5.286 - 5.310: 97.6333% ( 1) 00:18:50.358 5.333 - 5.357: 97.6487% ( 2) 00:18:50.358 5.357 - 5.381: 97.6564% ( 1) 00:18:50.358 5.428 - 5.452: 97.6641% ( 1) 00:18:50.358 5.499 - 5.523: 97.6717% ( 1) 00:18:50.358 5.570 - 5.594: 97.6794% ( 1) 00:18:50.358 5.665 - 5.689: 97.6871% ( 1) 00:18:50.358 5.760 - 5.784: 97.6948% ( 1) 00:18:50.358 5.784 - 5.807: 97.7025% ( 1) 00:18:50.358 5.950 - 5.973: 97.7102% ( 1) 00:18:50.358 5.973 - 5.997: 97.7178% ( 1) 00:18:50.358 5.997 - 6.021: 97.7409% ( 3) 00:18:50.358 6.044 - 6.068: 97.7486% ( 1) 00:18:50.358 6.068 - 6.116: 97.7563% ( 1) 00:18:50.358 6.116 - 6.163: 97.7639% ( 1) 00:18:50.358 6.163 - 6.210: 97.7793% ( 2) 00:18:50.358 6.210 - 6.258: 97.7870% ( 1) 00:18:50.358 6.305 - 6.353: 97.8024% ( 2) 00:18:50.358 6.353 - 6.400: 97.8177% ( 2) 00:18:50.358 6.400 - 6.447: 97.8331% ( 2) 00:18:50.358 6.447 - 6.495: 97.8485% ( 2) 00:18:50.358 6.542 - 6.590: 97.8562% ( 1) 00:18:50.358 6.590 - 6.637: 97.8792% ( 3) 00:18:50.358 6.874 - 6.921: 97.8946% ( 2) 00:18:50.358 6.969 - 7.016: 97.9023% ( 1) 00:18:50.358 7.016 - 7.064: 97.9176% ( 2) 00:18:50.358 7.111 - 7.159: 97.9253% ( 1) 00:18:50.358 7.206 - 7.253: 97.9407% ( 2) 00:18:50.358 7.301 - 7.348: 97.9484% ( 1) 00:18:50.358 7.443 - 7.490: 97.9560% ( 1) 00:18:50.358 7.585 - 7.633: 97.9637% ( 1) 00:18:50.358 7.775 - 7.822: 97.9868% ( 3) 00:18:50.358 7.822 - 7.870: 97.9945% ( 1) 00:18:50.358 7.964 - 8.012: 98.0175% ( 3) 00:18:50.358 8.012 - 8.059: 98.0329% ( 2) 00:18:50.358 8.107 - 8.154: 98.0406% ( 1) 00:18:50.358 8.154 - 8.201: 98.0483% ( 1) 00:18:50.358 8.201 - 8.249: 98.0559% ( 1) 00:18:50.358 8.249 - 8.296: 98.0713% ( 2) 00:18:50.358 8.296 - 8.344: 98.0867% ( 2) 00:18:50.358 8.344 - 8.391: 98.1020% ( 2) 00:18:50.358 8.439 - 8.486: 98.1097% ( 1) 00:18:50.358 8.533 - 8.581: 98.1251% ( 2) 00:18:50.358 8.581 - 8.628: 98.1328% ( 1) 00:18:50.358 8.628 - 8.676: 98.1405% ( 1) 00:18:50.359 8.676 - 8.723: 98.1481% ( 1) 00:18:50.359 8.723 - 8.770: 98.1635% ( 2) 00:18:50.359 8.913 - 8.960: 98.1712% ( 1) 00:18:50.359 9.007 - 9.055: 98.1943% ( 3) 00:18:50.359 9.055 - 9.102: 98.2019% ( 1) 00:18:50.359 9.102 - 9.150: 98.2173% ( 2) 00:18:50.359 9.150 - 9.197: 98.2404% ( 3) 00:18:50.359 9.244 - 9.292: 98.2557% ( 2) 00:18:50.359 9.387 - 9.434: 98.2634% ( 1) 00:18:50.359 9.434 - 9.481: 98.2711% ( 1) 00:18:50.359 9.576 - 9.624: 98.2865% ( 2) 00:18:50.359 9.624 - 9.671: 98.2941% ( 1) 00:18:50.359 9.671 - 9.719: 98.3095% ( 2) 00:18:50.359 9.719 - 9.766: 98.3172% ( 1) 00:18:50.359 9.766 - 9.813: 98.3326% ( 2) 00:18:50.359 9.861 - 9.908: 98.3402% ( 1) 00:18:50.359 9.908 - 9.956: 98.3479% ( 1) 00:18:50.359 10.240 - 10.287: 98.3633% ( 2) 00:18:50.359 10.287 - 10.335: 98.3864% ( 3) 00:18:50.359 10.477 - 10.524: 98.3940% ( 1) 00:18:50.359 10.856 - 10.904: 98.4017% ( 1) 00:18:50.359 10.904 - 10.951: 98.4094% ( 1) 00:18:50.359 11.046 - 11.093: 98.4171% ( 1) 00:18:50.359 11.141 - 11.188: 98.4248% ( 1) 00:18:50.359 11.473 - 11.520: 98.4401% ( 2) 00:18:50.359 11.567 - 11.615: 98.4555% ( 2) 00:18:50.359 11.710 - 11.757: 98.4632% ( 1) 00:18:50.359 11.804 - 11.852: 98.4709% ( 1) 00:18:50.359 11.899 - 11.947: 98.4786% ( 1) 00:18:50.359 12.089 - 
12.136: 98.4862% ( 1) 00:18:50.359 12.136 - 12.231: 98.4939% ( 1) 00:18:50.359 12.421 - 12.516: 98.5016% ( 1) 00:18:50.359 12.516 - 12.610: 98.5093% ( 1) 00:18:50.359 12.800 - 12.895: 98.5170% ( 1) 00:18:50.359 12.895 - 12.990: 98.5247% ( 1) 00:18:50.359 12.990 - 13.084: 98.5323% ( 1) 00:18:50.359 13.084 - 13.179: 98.5400% ( 1) 00:18:50.359 13.274 - 13.369: 98.5554% ( 2) 00:18:50.359 13.369 - 13.464: 98.5631% ( 1) 00:18:50.359 13.464 - 13.559: 98.5861% ( 3) 00:18:50.359 13.559 - 13.653: 98.6092% ( 3) 00:18:50.359 13.653 - 13.748: 98.6169% ( 1) 00:18:50.359 14.033 - 14.127: 98.6246% ( 1) 00:18:50.359 14.127 - 14.222: 98.6399% ( 2) 00:18:50.359 14.222 - 14.317: 98.6476% ( 1) 00:18:50.359 14.317 - 14.412: 98.6553% ( 1) 00:18:50.359 14.507 - 14.601: 98.6630% ( 1) 00:18:50.359 14.601 - 14.696: 98.6783% ( 2) 00:18:50.359 14.981 - 15.076: 98.6860% ( 1) 00:18:50.359 15.265 - 15.360: 98.6937% ( 1) 00:18:50.359 15.455 - 15.550: 98.7014% ( 1) 00:18:50.359 17.161 - 17.256: 98.7168% ( 2) 00:18:50.359 17.256 - 17.351: 98.7398% ( 3) 00:18:50.359 17.351 - 17.446: 98.7629% ( 3) 00:18:50.359 17.446 - 17.541: 98.7859% ( 3) 00:18:50.359 17.541 - 17.636: 98.8320% ( 6) 00:18:50.359 17.636 - 17.730: 98.9089% ( 10) 00:18:50.359 17.730 - 17.825: 98.9473% ( 5) 00:18:50.359 17.825 - 17.920: 99.0088% ( 8) 00:18:50.359 17.920 - 18.015: 99.0856% ( 10) 00:18:50.359 18.015 - 18.110: 99.1471% ( 8) 00:18:50.359 18.110 - 18.204: 99.1932% ( 6) 00:18:50.359 18.204 - 18.299: 99.2700% ( 10) 00:18:50.359 18.299 - 18.394: 99.3238% ( 7) 00:18:50.359 18.394 - 18.489: 99.4160% ( 12) 00:18:50.359 18.489 - 18.584: 99.5159% ( 13) 00:18:50.359 18.584 - 18.679: 99.5466% ( 4) 00:18:50.359 18.679 - 18.773: 99.5697% ( 3) 00:18:50.359 18.773 - 18.868: 99.5774% ( 1) 00:18:50.359 18.868 - 18.963: 99.6081% ( 4) 00:18:50.359 18.963 - 19.058: 99.6465% ( 5) 00:18:50.359 19.058 - 19.153: 99.6542% ( 1) 00:18:50.359 19.342 - 19.437: 99.6619% ( 1) 00:18:50.359 19.627 - 19.721: 99.6773% ( 2) 00:18:50.359 20.575 - 20.670: 99.6850% ( 1) 00:18:50.359 20.764 - 20.859: 99.7003% ( 2) 00:18:50.359 20.954 - 21.049: 99.7080% ( 1) 00:18:50.359 21.428 - 21.523: 99.7157% ( 1) 00:18:50.359 21.997 - 22.092: 99.7234% ( 1) 00:18:50.359 22.376 - 22.471: 99.7311% ( 1) 00:18:50.359 22.661 - 22.756: 99.7387% ( 1) 00:18:50.359 23.040 - 23.135: 99.7464% ( 1) 00:18:50.359 23.135 - 23.230: 99.7618% ( 2) 00:18:50.359 23.514 - 23.609: 99.7772% ( 2) 00:18:50.359 23.609 - 23.704: 99.7848% ( 1) 00:18:50.359 24.462 - 24.652: 99.7925% ( 1) 00:18:50.359 25.031 - 25.221: 99.8002% ( 1) 00:18:50.359 25.410 - 25.600: 99.8079% ( 1) 00:18:50.359 26.169 - 26.359: 99.8156% ( 1) 00:18:50.359 28.255 - 28.444: 99.8233% ( 1) 00:18:50.359 28.824 - 29.013: 99.8386% ( 2) 00:18:50.359 29.013 - 29.203: 99.8463% ( 1) 00:18:50.359 29.393 - 29.582: 99.8540% ( 1) 00:18:50.359 29.961 - 30.151: 99.8617% ( 1) 00:18:50.359 34.323 - 34.513: 99.8694% ( 1) 00:18:50.359 39.822 - 40.012: 99.8771% ( 1) 00:18:50.359 3665.161 - 3689.434: 99.8847% ( 1) 00:18:50.359 3980.705 - 4004.978: 99.9616% ( 10) 00:18:50.359 4004.978 - 4029.250: 100.0000% ( 5) 00:18:50.359 00:18:50.359 Complete histogram 00:18:50.359 ================== 00:18:50.359 Range in us Cumulative Count 00:18:50.359 2.039 - 2.050: 0.3074% ( 40) 00:18:50.359 2.050 - 2.062: 20.6931% ( 2653) 00:18:50.359 2.062 - 2.074: 42.3160% ( 2814) 00:18:50.359 2.074 - 2.086: 44.7825% ( 321) 00:18:50.359 2.086 - 2.098: 54.3953% ( 1251) 00:18:50.359 2.098 - 2.110: 59.1056% ( 613) 00:18:50.359 2.110 - 2.121: 61.9103% ( 365) 00:18:50.359 2.121 - 2.133: 71.4154% ( 1237) 
00:18:50.359 2.133 - 2.145: 75.0961% ( 479) 00:18:50.359 2.145 - 2.157: 76.2794% ( 154) 00:18:50.359 2.157 - 2.169: 79.0841% ( 365) 00:18:50.359 2.169 - 2.181: 80.0369% ( 124) 00:18:50.359 2.181 - 2.193: 81.4277% ( 181) 00:18:50.359 2.193 - 2.204: 86.1226% ( 611) 00:18:50.359 2.204 - 2.216: 88.5969% ( 322) 00:18:50.359 2.216 - 2.228: 90.6024% ( 261) 00:18:50.359 2.228 - 2.240: 92.5158% ( 249) 00:18:50.359 2.240 - 2.252: 93.2150% ( 91) 00:18:50.359 2.252 - 2.264: 93.5531% ( 44) 00:18:50.359 2.264 - 2.276: 93.9066% ( 46) 00:18:50.359 2.276 - 2.287: 94.4829% ( 75) 00:18:50.359 2.287 - 2.299: 95.0515% ( 74) 00:18:50.359 2.299 - 2.311: 95.2666% ( 28) 00:18:50.359 2.311 - 2.323: 95.3512% ( 11) 00:18:50.359 2.323 - 2.335: 95.3973% ( 6) 00:18:50.359 2.335 - 2.347: 95.4434% ( 6) 00:18:50.359 2.347 - 2.359: 95.5970% ( 20) 00:18:50.359 2.359 - 2.370: 96.0043% ( 53) 00:18:50.359 2.370 - 2.382: 96.4346% ( 56) 00:18:50.359 2.382 - 2.394: 96.7343% ( 39) 00:18:50.359 2.394 - 2.406: 96.9955% ( 34) 00:18:50.359 2.406 - 2.418: 97.1339% ( 18) 00:18:50.359 2.418 - 2.430: 97.2799% ( 19) 00:18:50.359 2.430 - 2.441: 97.4258% ( 19) 00:18:50.359 2.441 - 2.453: 97.6103% ( 24) 00:18:50.359 2.453 - 2.465: 97.7793% ( 22) 00:18:50.359 2.465 - 2.477: 97.9407% ( 21) 00:18:50.359 2.477 - 2.489: 98.0406% ( 13) 00:18:50.359 2.489 - 2.501: 98.1174% ( 10) 00:18:50.359 2.501 - 2.513: 98.1635% ( 6) 00:18:50.359 2.513 - 2.524: 98.2019% ( 5) 00:18:50.359 2.524 - 2.536: 98.2327% ( 4) 00:18:50.359 2.536 - 2.548: 98.2480% ( 2) 00:18:50.359 2.548 - 2.560: 98.2634% ( 2) 00:18:50.359 2.560 - 2.572: 98.2711% ( 1) 00:18:50.359 2.584 - 2.596: 98.2788% ( 1) 00:18:50.359 2.607 - 2.619: 98.2941% ( 2) 00:18:50.359 2.619 - 2.631: 98.3018% ( 1) 00:18:50.359 2.631 - 2.643: 98.3172% ( 2) 00:18:50.359 2.655 - 2.667: 98.3249% ( 1) 00:18:50.359 2.679 - 2.690: 98.3326% ( 1) 00:18:50.359 2.714 - 2.726: 98.3479% ( 2) 00:18:50.359 2.785 - 2.797: 98.3556% ( 1) 00:18:50.359 2.868 - 2.880: 98.3633% ( 1) 00:18:50.359 2.939 - 2.951: 98.3710% ( 1) 00:18:50.359 3.461 - 3.484: 98.3940% ( 3) 00:18:50.359 3.532 - 3.556: 98.4017% ( 1) 00:18:50.359 3.603 - 3.627: 98.4094% ( 1) 00:18:50.359 3.627 - 3.650: 98.4248% ( 2) 00:18:50.359 3.650 - 3.674: 98.4401% ( 2) 00:18:50.359 3.674 - 3.698: 98.4555% ( 2) 00:18:50.359 3.698 - 3.721: 98.4709% ( 2) 00:18:50.359 3.721 - 3.745: 98.4862% ( 2) 00:18:50.359 3.745 - 3.769: 98.5016% ( 2) 00:18:50.359 3.769 - 3.793: 98.5170% ( 2) 00:18:50.359 3.793 - 3.816: 98.5247% ( 1) 00:18:50.359 3.816 - 3.840: 98.5400% ( 2) 00:18:50.359 3.864 - 3.887: 98.5477% ( 1) 00:18:50.359 3.911 - 3.935: 98.5554% ( 1) 00:18:50.359 3.982 - 4.006: 98.5631% ( 1) 00:18:50.359 4.077 - 4.101: 98.5708% ( 1) 00:18:50.359 4.148 - 4.172: 98.5785% ( 1) 00:18:50.359 4.219 - 4.243: 98.5861% ( 1) 00:18:50.359 4.527 - 4.551: 98.5938% ( 1) 00:18:50.359 4.622 - 4.646: 98.6015% ( 1) 00:18:50.359 5.618 - 5.641: 98.6092% ( 1) 00:18:50.359 5.641 - 5.665: 98.6169% ( 1) 00:18:50.359 5.950 - 5.973: 98.6246% ( 1) 00:18:50.359 6.044 - 6.068: 98.6322% ( 1) 00:18:50.359 6.068 - 6.116: 98.6399% ( 1) 00:18:50.359 6.163 - 6.210: 98.6476% ( 1) 00:18:50.359 6.210 - 6.258: 98.6630% ( 2) 00:18:50.359 6.305 - 6.353: 98.6707% ( 1) 00:18:50.359 6.353 - 6.400: 98.6783% ( 1) 00:18:50.359 6.400 - 6.447: 98.7091% ( 4) 00:18:50.359 6.590 - 6.637: 98.7168% ( 1) 00:18:50.359 6.637 - 6.684: 98.7245% ( 1) 00:18:50.359 6.684 - 6.732: 98.7321% ( 1) 00:18:50.360 6.874 - 6.921: 98.7475% ( 2) 00:18:50.360 7.111 - 7.159: 98.7552% ( 1) 00:18:50.360 7.159 - 7.206: 98.7629% ( 1) 00:18:50.360 7.206 - 7.253: 
98.7782% ( 2) 00:18:50.360 7.348 - 7.396: 98.7859% ( 1) 00:18:50.360 7.396 - 7.443: 98.7936% ( 1) 00:18:50.360 7.964 - 8.012: 98.8013% ( 1) 00:18:50.360 8.059 - 8.107: 98.8090% ( 1) 00:18:50.360 8.676 - 8.723: 98.8167% ( 1) 00:18:50.360 8.770 - 8.818: 98.8243% ( 1) 00:18:50.360 9.671 - 9.719: 98.8320% ( 1) 00:18:50.360 14.222 - 14.317: 98.8397% ( 1) 00:18:50.360 15.265 - 15.360: 98.8474% ( 1) 00:18:50.360 15.550 - 15.644: 98.8551% ( 1) 00:18:50.360 15.644 - 15.739: 98.8858% ( 4) 00:18:50.360 15.739 - 15.834: 98.9012% ( 2) 00:18:50.360 15.834 - 15.929: 98.9242% ( 3) 00:18:50.360 15.929 - 16.024: 98.9550% ( 4) 00:18:50.360 16.024 - 16.119: 98.9780% ( 3) 00:18:50.360 16.119 - 16.213: 98.9857% ( 1) 00:18:50.360 16.213 - 16.308: 99.0088% ( 3) 00:18:50.360 16.308 - 16.403: 99.0395% ( 4) 00:18:50.360 16.403 - 16.498: 99.0856% ( 6) 00:18:50.360 16.498 - 16.593: 99.1394% ( 7) 00:18:50.360 16.593 - 16.687: 99.1624% ( 3) 00:18:50.360 16.687 - 16.782: 99.2085% ( 6) 00:18:50.360 16.782 - 16.877: 99.2777% ( 9) 00:18:50.360 16.877 - 16.972: 99.2854% ( 1) 00:18:50.360 16.972 - 17.067: 99.3008% ( 2) 00:18:50.360 17.067 - 17.161: 99.3161% ( 2) 00:18:50.360 17.161 - 17.256: 99.3315% ( 2) 00:18:50.360 17.256 - 17.351: 99.3392% ( 1) 00:18:50.360 17.541 - 17.636: 99.3469% ( 1) 00:18:50.360 17.636 - 17.730: 99.3545% ( 1) 00:18:50.360 17.730 - 17.825: 99.3699% ( 2) 00:18:50.360 18.110 - 18.204: 99.3776% ( 1) 00:18:50.360 18.394 - 18.489: 99.3853% ( 1) 00:18:50.360 19.437 - 19.532: 99.3930% ( 1) 00:18:50.360 22.376 - 22.471: 99.4006% ( 1) 00:18:50.360 28.065 - 28.255: 99.4083%[2024-11-26 18:14:38.353167] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:50.617 ( 1) 00:18:50.617 3980.705 - 4004.978: 99.8156% ( 53) 00:18:50.617 4004.978 - 4029.250: 100.0000% ( 24) 00:18:50.617 00:18:50.617 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:50.617 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:50.617 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:50.617 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:50.617 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:50.875 [ 00:18:50.875 { 00:18:50.875 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:50.875 "subtype": "Discovery", 00:18:50.875 "listen_addresses": [], 00:18:50.875 "allow_any_host": true, 00:18:50.875 "hosts": [] 00:18:50.875 }, 00:18:50.875 { 00:18:50.875 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:50.875 "subtype": "NVMe", 00:18:50.875 "listen_addresses": [ 00:18:50.875 { 00:18:50.875 "trtype": "VFIOUSER", 00:18:50.875 "adrfam": "IPv4", 00:18:50.875 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:50.875 "trsvcid": "0" 00:18:50.875 } 00:18:50.875 ], 00:18:50.875 "allow_any_host": true, 00:18:50.875 "hosts": [], 00:18:50.875 "serial_number": "SPDK1", 00:18:50.875 "model_number": "SPDK bdev Controller", 00:18:50.875 "max_namespaces": 32, 00:18:50.875 "min_cntlid": 1, 00:18:50.875 "max_cntlid": 65519, 00:18:50.875 "namespaces": [ 00:18:50.875 { 00:18:50.875 "nsid": 1, 00:18:50.875 "bdev_name": 
"Malloc1", 00:18:50.875 "name": "Malloc1", 00:18:50.875 "nguid": "2C8B5B6D070B437C8796F3C71A9682CA", 00:18:50.875 "uuid": "2c8b5b6d-070b-437c-8796-f3c71a9682ca" 00:18:50.875 }, 00:18:50.875 { 00:18:50.875 "nsid": 2, 00:18:50.875 "bdev_name": "Malloc3", 00:18:50.875 "name": "Malloc3", 00:18:50.875 "nguid": "B610ADF868EC4D86AFFB0BF96394865B", 00:18:50.875 "uuid": "b610adf8-68ec-4d86-affb-0bf96394865b" 00:18:50.875 } 00:18:50.875 ] 00:18:50.875 }, 00:18:50.875 { 00:18:50.875 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:50.875 "subtype": "NVMe", 00:18:50.875 "listen_addresses": [ 00:18:50.875 { 00:18:50.875 "trtype": "VFIOUSER", 00:18:50.875 "adrfam": "IPv4", 00:18:50.875 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:50.875 "trsvcid": "0" 00:18:50.875 } 00:18:50.875 ], 00:18:50.875 "allow_any_host": true, 00:18:50.875 "hosts": [], 00:18:50.875 "serial_number": "SPDK2", 00:18:50.875 "model_number": "SPDK bdev Controller", 00:18:50.875 "max_namespaces": 32, 00:18:50.875 "min_cntlid": 1, 00:18:50.875 "max_cntlid": 65519, 00:18:50.875 "namespaces": [ 00:18:50.875 { 00:18:50.875 "nsid": 1, 00:18:50.875 "bdev_name": "Malloc2", 00:18:50.875 "name": "Malloc2", 00:18:50.875 "nguid": "D58BC14F2F0D4D9F82B6C6BC81E6C1F0", 00:18:50.875 "uuid": "d58bc14f-2f0d-4d9f-82b6-c6bc81e6c1f0" 00:18:50.875 } 00:18:50.875 ] 00:18:50.875 } 00:18:50.875 ] 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=589703 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:50.875 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:51.133 [2024-11-26 18:14:38.897784] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:51.133 Malloc4 00:18:51.133 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:51.390 [2024-11-26 18:14:39.310972] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:51.390 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:51.390 Asynchronous Event Request test 00:18:51.390 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:51.390 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:51.390 Registering asynchronous event callbacks... 00:18:51.390 Starting namespace attribute notice tests for all controllers... 00:18:51.390 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:51.390 aer_cb - Changed Namespace 00:18:51.390 Cleaning up... 00:18:51.648 [ 00:18:51.648 { 00:18:51.648 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:51.648 "subtype": "Discovery", 00:18:51.648 "listen_addresses": [], 00:18:51.648 "allow_any_host": true, 00:18:51.648 "hosts": [] 00:18:51.648 }, 00:18:51.648 { 00:18:51.648 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:51.648 "subtype": "NVMe", 00:18:51.648 "listen_addresses": [ 00:18:51.648 { 00:18:51.648 "trtype": "VFIOUSER", 00:18:51.648 "adrfam": "IPv4", 00:18:51.648 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:51.648 "trsvcid": "0" 00:18:51.648 } 00:18:51.648 ], 00:18:51.648 "allow_any_host": true, 00:18:51.648 "hosts": [], 00:18:51.648 "serial_number": "SPDK1", 00:18:51.648 "model_number": "SPDK bdev Controller", 00:18:51.648 "max_namespaces": 32, 00:18:51.648 "min_cntlid": 1, 00:18:51.648 "max_cntlid": 65519, 00:18:51.648 "namespaces": [ 00:18:51.648 { 00:18:51.648 "nsid": 1, 00:18:51.648 "bdev_name": "Malloc1", 00:18:51.648 "name": "Malloc1", 00:18:51.648 "nguid": "2C8B5B6D070B437C8796F3C71A9682CA", 00:18:51.648 "uuid": "2c8b5b6d-070b-437c-8796-f3c71a9682ca" 00:18:51.648 }, 00:18:51.648 { 00:18:51.648 "nsid": 2, 00:18:51.648 "bdev_name": "Malloc3", 00:18:51.648 "name": "Malloc3", 00:18:51.648 "nguid": "B610ADF868EC4D86AFFB0BF96394865B", 00:18:51.648 "uuid": "b610adf8-68ec-4d86-affb-0bf96394865b" 00:18:51.648 } 00:18:51.648 ] 00:18:51.648 }, 00:18:51.648 { 00:18:51.648 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:51.648 "subtype": "NVMe", 00:18:51.648 "listen_addresses": [ 00:18:51.648 { 00:18:51.648 "trtype": "VFIOUSER", 00:18:51.648 "adrfam": "IPv4", 00:18:51.648 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:51.648 "trsvcid": "0" 00:18:51.648 } 00:18:51.648 ], 00:18:51.648 "allow_any_host": true, 00:18:51.648 "hosts": [], 00:18:51.648 "serial_number": "SPDK2", 00:18:51.648 "model_number": "SPDK bdev 
Controller", 00:18:51.648 "max_namespaces": 32, 00:18:51.648 "min_cntlid": 1, 00:18:51.648 "max_cntlid": 65519, 00:18:51.648 "namespaces": [ 00:18:51.648 { 00:18:51.648 "nsid": 1, 00:18:51.648 "bdev_name": "Malloc2", 00:18:51.648 "name": "Malloc2", 00:18:51.648 "nguid": "D58BC14F2F0D4D9F82B6C6BC81E6C1F0", 00:18:51.648 "uuid": "d58bc14f-2f0d-4d9f-82b6-c6bc81e6c1f0" 00:18:51.648 }, 00:18:51.648 { 00:18:51.648 "nsid": 2, 00:18:51.648 "bdev_name": "Malloc4", 00:18:51.648 "name": "Malloc4", 00:18:51.648 "nguid": "A841A9B318B9420D91C2D245CFB9263D", 00:18:51.648 "uuid": "a841a9b3-18b9-420d-91c2-d245cfb9263d" 00:18:51.648 } 00:18:51.648 ] 00:18:51.648 } 00:18:51.648 ] 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 589703 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 584084 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 584084 ']' 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 584084 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584084 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584084' 00:18:51.648 killing process with pid 584084 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 584084 00:18:51.648 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 584084 00:18:52.213 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=589852 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 589852' 00:18:52.214 Process pid: 589852 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 589852 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 589852 ']' 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.214 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:52.214 [2024-11-26 18:14:40.026593] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:52.214 [2024-11-26 18:14:40.027709] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:18:52.214 [2024-11-26 18:14:40.027786] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.214 [2024-11-26 18:14:40.096529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:52.214 [2024-11-26 18:14:40.152917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.214 [2024-11-26 18:14:40.152973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.214 [2024-11-26 18:14:40.152994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.214 [2024-11-26 18:14:40.153005] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.214 [2024-11-26 18:14:40.153013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:52.214 [2024-11-26 18:14:40.154670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.214 [2024-11-26 18:14:40.156767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.214 [2024-11-26 18:14:40.156918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:52.214 [2024-11-26 18:14:40.156922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.473 [2024-11-26 18:14:40.256378] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:52.473 [2024-11-26 18:14:40.256516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:52.473 [2024-11-26 18:14:40.256811] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:52.473 [2024-11-26 18:14:40.257483] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:18:52.473 [2024-11-26 18:14:40.257740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:52.473 18:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.473 18:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:52.473 18:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:53.409 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:53.666 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:53.666 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:53.666 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:53.666 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:53.666 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:53.924 Malloc1 00:18:53.924 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:54.489 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:54.489 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:55.054 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:55.054 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:55.054 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:55.312 Malloc2 00:18:55.312 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:55.593 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:55.928 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:55.928 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:55.928 18:14:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 589852 00:18:55.928 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 589852 ']' 00:18:55.928 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 589852 00:18:55.928 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:18:55.928 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.928 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589852 00:18:56.186 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.186 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.186 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589852' 00:18:56.186 killing process with pid 589852 00:18:56.186 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 589852 00:18:56.186 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 589852 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:56.444 00:18:56.444 real 0m53.695s 00:18:56.444 user 3m27.698s 00:18:56.444 sys 0m4.000s 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:56.444 ************************************ 00:18:56.444 END TEST nvmf_vfio_user 00:18:56.444 ************************************ 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:56.444 ************************************ 00:18:56.444 START TEST nvmf_vfio_user_nvme_compliance 00:18:56.444 ************************************ 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:56.444 * Looking for test storage... 
00:18:56.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:56.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.444 --rc genhtml_branch_coverage=1 00:18:56.444 --rc genhtml_function_coverage=1 00:18:56.444 --rc genhtml_legend=1 00:18:56.444 --rc geninfo_all_blocks=1 00:18:56.444 --rc geninfo_unexecuted_blocks=1 00:18:56.444 00:18:56.444 ' 00:18:56.444 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:56.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.444 --rc genhtml_branch_coverage=1 00:18:56.444 --rc genhtml_function_coverage=1 00:18:56.444 --rc genhtml_legend=1 00:18:56.444 --rc geninfo_all_blocks=1 00:18:56.444 --rc geninfo_unexecuted_blocks=1 00:18:56.444 00:18:56.444 ' 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:56.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.445 --rc genhtml_branch_coverage=1 00:18:56.445 --rc genhtml_function_coverage=1 00:18:56.445 --rc genhtml_legend=1 00:18:56.445 --rc geninfo_all_blocks=1 00:18:56.445 --rc geninfo_unexecuted_blocks=1 00:18:56.445 00:18:56.445 ' 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:56.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:56.445 --rc genhtml_branch_coverage=1 00:18:56.445 --rc genhtml_function_coverage=1 00:18:56.445 --rc genhtml_legend=1 00:18:56.445 --rc geninfo_all_blocks=1 00:18:56.445 --rc 
geninfo_unexecuted_blocks=1 00:18:56.445 00:18:56.445 ' 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.445 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.703 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:56.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=590466 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 590466' 00:18:56.704 Process pid: 590466 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 590466 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 590466 ']' 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:56.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:56.704 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:56.704 [2024-11-26 18:14:44.513941] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:18:56.704 [2024-11-26 18:14:44.514013] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.704 [2024-11-26 18:14:44.590059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:56.704 [2024-11-26 18:14:44.659070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.704 [2024-11-26 18:14:44.659126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.704 [2024-11-26 18:14:44.659157] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.704 [2024-11-26 18:14:44.659174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.704 [2024-11-26 18:14:44.659191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.704 [2024-11-26 18:14:44.660918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.704 [2024-11-26 18:14:44.660991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.704 [2024-11-26 18:14:44.660984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.962 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.962 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:18:56.962 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:57.896 malloc0 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:57.896 18:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.896 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:58.154 00:18:58.154 00:18:58.154 CUnit - A unit testing framework for C - Version 2.1-3 00:18:58.154 http://cunit.sourceforge.net/ 00:18:58.154 00:18:58.154 00:18:58.154 Suite: nvme_compliance 00:18:58.154 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 18:14:46.051850] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.154 [2024-11-26 18:14:46.053297] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:58.154 [2024-11-26 18:14:46.053331] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:58.154 [2024-11-26 18:14:46.053344] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:58.154 [2024-11-26 18:14:46.054867] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.154 passed 00:18:58.154 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 18:14:46.138461] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.154 [2024-11-26 18:14:46.141485] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.412 passed 00:18:58.412 Test: admin_identify_ns ...[2024-11-26 18:14:46.229868] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.412 [2024-11-26 18:14:46.290338] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:58.412 [2024-11-26 18:14:46.298324] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:58.412 [2024-11-26 18:14:46.319444] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:58.412 passed 00:18:58.412 Test: admin_get_features_mandatory_features ...[2024-11-26 18:14:46.403156] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.412 [2024-11-26 18:14:46.406178] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.670 passed 00:18:58.671 Test: admin_get_features_optional_features ...[2024-11-26 18:14:46.491800] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.671 [2024-11-26 18:14:46.494817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.671 passed 00:18:58.671 Test: admin_set_features_number_of_queues ...[2024-11-26 18:14:46.577876] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.929 [2024-11-26 18:14:46.682423] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.929 passed 00:18:58.929 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 18:14:46.766097] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.929 [2024-11-26 18:14:46.769122] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:58.929 passed 00:18:58.929 Test: admin_get_log_page_with_lpo ...[2024-11-26 18:14:46.853334] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.929 [2024-11-26 18:14:46.919319] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:58.929 [2024-11-26 18:14:46.932398] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.187 passed 00:18:59.187 Test: fabric_property_get ...[2024-11-26 18:14:47.018925] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.187 [2024-11-26 18:14:47.020200] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:59.187 [2024-11-26 18:14:47.021949] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.187 passed 00:18:59.187 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 18:14:47.105499] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.187 [2024-11-26 18:14:47.106788] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:59.187 [2024-11-26 18:14:47.108522] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.187 passed 00:18:59.187 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 18:14:47.191716] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.445 [2024-11-26 18:14:47.275310] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:59.445 [2024-11-26 18:14:47.291316] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:59.445 [2024-11-26 18:14:47.296430] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.445 passed 00:18:59.445 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 18:14:47.381724] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.445 [2024-11-26 18:14:47.383001] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:59.445 [2024-11-26 18:14:47.384747] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.445 passed 00:18:59.703 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 18:14:47.468181] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.703 [2024-11-26 18:14:47.548315] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:59.703 [2024-11-26 18:14:47.572325] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:59.703 [2024-11-26 18:14:47.577439] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.703 passed 00:18:59.703 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 18:14:47.659607] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.703 [2024-11-26 18:14:47.660918] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:59.703 [2024-11-26 18:14:47.660957] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:59.703 [2024-11-26 18:14:47.662616] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.703 passed 00:18:59.960 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 18:14:47.747140] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.960 [2024-11-26 18:14:47.838342] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:59.960 [2024-11-26 18:14:47.846341] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:59.960 [2024-11-26 18:14:47.854314] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:59.960 [2024-11-26 18:14:47.862310] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:59.960 [2024-11-26 18:14:47.891426] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.960 passed 00:19:00.218 Test: admin_create_io_sq_verify_pc ...[2024-11-26 18:14:47.974767] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:00.218 [2024-11-26 18:14:47.990327] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:00.218 [2024-11-26 18:14:48.008190] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:00.218 passed 00:19:00.218 Test: admin_create_io_qp_max_qps ...[2024-11-26 18:14:48.092770] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.590 [2024-11-26 18:14:49.194319] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:01.590 [2024-11-26 18:14:49.579183] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:01.848 passed 00:19:01.848 Test: admin_create_io_sq_shared_cq ...[2024-11-26 18:14:49.663540] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:01.848 [2024-11-26 18:14:49.795312] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:01.848 [2024-11-26 18:14:49.832404] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:02.106 passed 00:19:02.106 00:19:02.106 Run Summary: Type Total Ran Passed Failed Inactive 00:19:02.106 suites 1 1 n/a 0 0 00:19:02.106 tests 18 18 18 0 0 00:19:02.106 asserts 
360 360 360 0 n/a 00:19:02.106 00:19:02.106 Elapsed time = 1.570 seconds 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 590466 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 590466 ']' 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 590466 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 590466 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 590466' 00:19:02.106 killing process with pid 590466 00:19:02.106 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 590466 00:19:02.107 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 590466 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:02.365 00:19:02.365 real 0m5.893s 00:19:02.365 user 0m16.442s 00:19:02.365 sys 0m0.593s 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:02.365 ************************************ 00:19:02.365 END TEST nvmf_vfio_user_nvme_compliance 00:19:02.365 ************************************ 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:02.365 ************************************ 00:19:02.365 START TEST nvmf_vfio_user_fuzz 00:19:02.365 ************************************ 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:02.365 * Looking for test storage... 
00:19:02.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.365 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.625 --rc genhtml_branch_coverage=1 00:19:02.625 --rc genhtml_function_coverage=1 00:19:02.625 --rc genhtml_legend=1 00:19:02.625 --rc geninfo_all_blocks=1 00:19:02.625 --rc geninfo_unexecuted_blocks=1 00:19:02.625 00:19:02.625 ' 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.625 --rc genhtml_branch_coverage=1 00:19:02.625 --rc genhtml_function_coverage=1 00:19:02.625 --rc genhtml_legend=1 00:19:02.625 --rc geninfo_all_blocks=1 00:19:02.625 --rc geninfo_unexecuted_blocks=1 00:19:02.625 00:19:02.625 ' 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.625 --rc genhtml_branch_coverage=1 00:19:02.625 --rc genhtml_function_coverage=1 00:19:02.625 --rc genhtml_legend=1 00:19:02.625 --rc geninfo_all_blocks=1 00:19:02.625 --rc geninfo_unexecuted_blocks=1 00:19:02.625 00:19:02.625 ' 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:02.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.625 --rc genhtml_branch_coverage=1 00:19:02.625 --rc genhtml_function_coverage=1 00:19:02.625 --rc genhtml_legend=1 00:19:02.625 --rc geninfo_all_blocks=1 00:19:02.625 --rc geninfo_unexecuted_blocks=1 00:19:02.625 00:19:02.625 ' 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.625 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:02.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=591315 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 591315' 00:19:02.626 Process pid: 591315 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 591315 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 591315 ']' 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
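For readers following the trace: the waitforlisten helper (the common/autotest_common.sh lines shown above) does nothing more than retry the target's RPC socket until the freshly started nvmf_tgt answers, up to max_retries. A rough standalone equivalent — a sketch only, assuming an SPDK checkout and the default /var/tmp/spdk.sock socket — would be:

sock=/var/tmp/spdk.sock
for i in $(seq 1 100); do                                    # max_retries=100, as in the trace
    # rpc_get_methods is a cheap RPC that succeeds once the target's RPC server is listening
    if ./scripts/rpc.py -t 1 -s "$sock" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done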
00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.626 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:02.885 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.885 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:02.885 18:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 malloc0 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
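To reproduce this fuzz-target setup outside the CI harness, the bring-up traced above reduces to the sketch below (paths are assumed relative to an SPDK checkout, the harness's rpc_cmd wrapper is replaced by a direct scripts/rpc.py call, and a wait for the RPC socket is needed in between):

# start the target on one core with all trace groups enabled, matching the trace above
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# ... wait for /var/tmp/spdk.sock to come up, e.g. with the polling loop sketched earlier ...

# vfio-user transport, a 64 MB malloc namespace, and a listener under /var/run/vfio-user
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows then targets 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'.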
00:19:03.817 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:35.886 Fuzzing completed. Shutting down the fuzz application 00:19:35.886 00:19:35.886 Dumping successful admin opcodes: 00:19:35.886 9, 10, 00:19:35.886 Dumping successful io opcodes: 00:19:35.886 0, 00:19:35.886 NS: 0x20000081ef00 I/O qp, Total commands completed: 683988, total successful commands: 2664, random_seed: 880496448 00:19:35.886 NS: 0x20000081ef00 admin qp, Total commands completed: 88192, total successful commands: 20, random_seed: 161928576 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 591315 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 591315 ']' 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 591315 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 591315 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 591315' 00:19:35.886 killing process with pid 591315 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 591315 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 591315 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:35.886 00:19:35.886 real 0m32.288s 00:19:35.886 user 0m30.095s 00:19:35.886 sys 0m30.832s 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:35.886 ************************************ 
00:19:35.886 END TEST nvmf_vfio_user_fuzz 00:19:35.886 ************************************ 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:35.886 ************************************ 00:19:35.886 START TEST nvmf_auth_target 00:19:35.886 ************************************ 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:35.886 * Looking for test storage... 00:19:35.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.886 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:35.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.886 --rc genhtml_branch_coverage=1 00:19:35.886 --rc genhtml_function_coverage=1 00:19:35.886 --rc genhtml_legend=1 00:19:35.886 --rc geninfo_all_blocks=1 00:19:35.887 --rc geninfo_unexecuted_blocks=1 00:19:35.887 00:19:35.887 ' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:35.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.887 --rc genhtml_branch_coverage=1 00:19:35.887 --rc genhtml_function_coverage=1 00:19:35.887 --rc genhtml_legend=1 00:19:35.887 --rc geninfo_all_blocks=1 00:19:35.887 --rc geninfo_unexecuted_blocks=1 00:19:35.887 00:19:35.887 ' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:35.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.887 --rc genhtml_branch_coverage=1 00:19:35.887 --rc genhtml_function_coverage=1 00:19:35.887 --rc genhtml_legend=1 00:19:35.887 --rc geninfo_all_blocks=1 00:19:35.887 --rc geninfo_unexecuted_blocks=1 00:19:35.887 00:19:35.887 ' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:35.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.887 --rc genhtml_branch_coverage=1 00:19:35.887 --rc genhtml_function_coverage=1 00:19:35.887 --rc genhtml_legend=1 00:19:35.887 --rc geninfo_all_blocks=1 00:19:35.887 --rc geninfo_unexecuted_blocks=1 00:19:35.887 00:19:35.887 ' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.887 18:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:35.887 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:37.266 
18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:37.266 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.266 18:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:37.266 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:37.266 Found net devices under 0000:09:00.0: cvl_0_0 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:37.266 Found net devices under 0000:09:00.1: cvl_0_1 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:37.266 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:37.267 18:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:37.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:19:37.267 00:19:37.267 --- 10.0.0.2 ping statistics --- 00:19:37.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.267 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:37.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:19:37.267 00:19:37.267 --- 10.0.0.1 ping statistics --- 00:19:37.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.267 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.267 18:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=597295 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 597295 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 597295 ']' 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
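Behind the nvmf_tcp_init and nvmfappstart steps traced above, the test moves one port of the e810 pair into a private network namespace so target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware, opens TCP/4420, and then launches nvmf_tgt inside that namespace with nvmf_auth logging enabled. Condensed, using the interface names from this run and assuming the commands run from the spdk checkout:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator-side port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp

# run the auth-test target inside the namespace with DH-HMAC-CHAP tracing on
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &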
00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.267 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=597407 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8cd9284b067f30a4325235b3465090ef8877aeb84f175df5 00:19:37.526 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jI6 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8cd9284b067f30a4325235b3465090ef8877aeb84f175df5 0 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8cd9284b067f30a4325235b3465090ef8877aeb84f175df5 0 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8cd9284b067f30a4325235b3465090ef8877aeb84f175df5 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
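gen_dhchap_key, traced above, pulls random bytes with xxd and hands the resulting hex string to a small inline Python step that wraps it in a DHHC-1 secret. The sketch below reconstructs that step from the usual NVMe DH-HMAC-CHAP secret layout (base64 of the key bytes followed by a little-endian CRC-32 of the key); it is not a verbatim copy of nvmf/common.sh, and the digest-to-id mapping mirrors the digests table shown in the trace:

digest=sha256; len=32                                  # e.g. gen_dhchap_key sha256 32
declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)         # $len hex characters of key material
file=$(mktemp -t spdk.key-$digest.XXX)
python3 - "$key" "${ids[$digest]}" > "$file" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
chmod 0600 "$file"; echo "$file"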
00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jI6 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jI6 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.jI6 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2fe7c15255db9e8e99ec4ffe14b4e0fe3de80cda5a7fd17907904087763c9760 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dfj 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2fe7c15255db9e8e99ec4ffe14b4e0fe3de80cda5a7fd17907904087763c9760 3 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2fe7c15255db9e8e99ec4ffe14b4e0fe3de80cda5a7fd17907904087763c9760 3 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2fe7c15255db9e8e99ec4ffe14b4e0fe3de80cda5a7fd17907904087763c9760 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dfj 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dfj 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.dfj 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
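For orientation, the gen_dhchap_key calls in this test (auth.sh@94 through @97, continuing below) pair each subsystem key with a controller key of a different digest and length. Summarized as digest:hex-length per call, not the file paths the script actually stores:

keys=( [0]=null:48   [1]=sha256:32 [2]=sha384:48 [3]=sha512:64 )   # --dhchap-key material
ckeys=([0]=sha512:64 [1]=sha384:48 [2]=sha256:32 )                 # --dhchap-ctrlr-key material; ckeys[3] stays empty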
00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=24abc983352495c9115490906ba8f52f 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gES 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 24abc983352495c9115490906ba8f52f 1 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 24abc983352495c9115490906ba8f52f 1 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=24abc983352495c9115490906ba8f52f 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gES 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gES 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.gES 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1825eff825c51319608c11c934116995d62630f591893cc1 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ada 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1825eff825c51319608c11c934116995d62630f591893cc1 2 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1825eff825c51319608c11c934116995d62630f591893cc1 2 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:37.527 18:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1825eff825c51319608c11c934116995d62630f591893cc1 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:37.527 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ada 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ada 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Ada 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b9f18318f68408daa1162d06a66a177817e9589af7a585a9 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.3Xb 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b9f18318f68408daa1162d06a66a177817e9589af7a585a9 2 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b9f18318f68408daa1162d06a66a177817e9589af7a585a9 2 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b9f18318f68408daa1162d06a66a177817e9589af7a585a9 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.3Xb 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.3Xb 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.3Xb 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
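For illustration only, a quick way to sanity-check one of the generated secrets. This helper is not part of the traced scripts; it assumes the same key-plus-little-endian-CRC-32 layout as above and that, as the chmod/echo steps suggest, each temp file holds the formatted DHHC-1 string:

secret=$(cat /tmp/spdk.key-sha384.Ada)                 # any of the files echoed by gen_dhchap_key
python3 - "$secret" <<'EOF'
import base64, sys, zlib
_, digest, b64, _ = sys.argv[1].split(":")
blob = base64.b64decode(b64)
key, crc = blob[:-4], blob[-4:]
print("digest id", digest, "| key bytes", len(key), "| crc ok:", zlib.crc32(key).to_bytes(4, "little") == crc)
EOF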
00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=202c424cbeb053c54454f800d288a6ab 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Tn5 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 202c424cbeb053c54454f800d288a6ab 1 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 202c424cbeb053c54454f800d288a6ab 1 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=202c424cbeb053c54454f800d288a6ab 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Tn5 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Tn5 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Tn5 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=78cc2a58403ff9f1b6ebe5b67e0b98bbd06d021a0c13c98bde49a0bbc30ac355 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ssA 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 78cc2a58403ff9f1b6ebe5b67e0b98bbd06d021a0c13c98bde49a0bbc30ac355 3 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 78cc2a58403ff9f1b6ebe5b67e0b98bbd06d021a0c13c98bde49a0bbc30ac355 3 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=78cc2a58403ff9f1b6ebe5b67e0b98bbd06d021a0c13c98bde49a0bbc30ac355 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ssA 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ssA 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ssA 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 597295 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 597295 ']' 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.787 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.045 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.046 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:38.046 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 597407 /var/tmp/host.sock 00:19:38.046 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 597407 ']' 00:19:38.046 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:38.046 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:38.046 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:38.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
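Two RPC servers are now up: the nvmf_tgt started earlier (default /var/tmp/spdk.sock, reached via rpc_cmd) and the host-side spdk_tgt listening on /var/tmp/host.sock (pid 597407, reached via hostrpc). The steps that follow in the trace register every generated key file on both sides so they can be referenced by name, then run the first connect_authenticate pass. Condensed, with rpc.py again standing for scripts/rpc.py and the NQNs and addresses taken from this run:

# target side: register the key files under names the subsystem config can reference
rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.jI6
rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dfj
# ... repeated for key1/ckey1, key2/ckey2 and key3

# host side: same files, registered with the spdk_tgt that will act as initiator
rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.jI6
rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dfj

# first connect_authenticate pass (sha256 digest, null DH group, key0/ckey0):
# restrict negotiation, authorize the host NQN with a key pair, then attach;
# the attach only succeeds if DH-HMAC-CHAP authentication completes
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0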
00:19:38.046 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:38.046 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jI6 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.jI6 00:19:38.305 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.jI6 00:19:38.563 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.dfj ]] 00:19:38.563 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dfj 00:19:38.563 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.563 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.563 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.563 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dfj 00:19:38.563 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dfj 00:19:38.821 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:38.821 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gES 00:19:38.822 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.822 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.080 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.080 18:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.gES 00:19:39.080 18:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.gES 00:19:39.337 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Ada ]] 00:19:39.337 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ada 00:19:39.337 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.337 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.337 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.337 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ada 00:19:39.337 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ada 00:19:39.595 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:39.595 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3Xb 00:19:39.595 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.595 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.595 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.595 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.3Xb 00:19:39.595 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.3Xb 00:19:39.853 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Tn5 ]] 00:19:39.853 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tn5 00:19:39.853 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.853 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.853 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.853 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tn5 00:19:39.853 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tn5 00:19:40.119 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:40.119 18:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ssA 00:19:40.119 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.119 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.119 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.119 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ssA 00:19:40.119 18:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ssA 00:19:40.405 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:40.405 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:40.405 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.405 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.405 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.405 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:40.662 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.663 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.663 
18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.921 00:19:40.921 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.921 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.921 18:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.179 { 00:19:41.179 "cntlid": 1, 00:19:41.179 "qid": 0, 00:19:41.179 "state": "enabled", 00:19:41.179 "thread": "nvmf_tgt_poll_group_000", 00:19:41.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:41.179 "listen_address": { 00:19:41.179 "trtype": "TCP", 00:19:41.179 "adrfam": "IPv4", 00:19:41.179 "traddr": "10.0.0.2", 00:19:41.179 "trsvcid": "4420" 00:19:41.179 }, 00:19:41.179 "peer_address": { 00:19:41.179 "trtype": "TCP", 00:19:41.179 "adrfam": "IPv4", 00:19:41.179 "traddr": "10.0.0.1", 00:19:41.179 "trsvcid": "59234" 00:19:41.179 }, 00:19:41.179 "auth": { 00:19:41.179 "state": "completed", 00:19:41.179 "digest": "sha256", 00:19:41.179 "dhgroup": "null" 00:19:41.179 } 00:19:41.179 } 00:19:41.179 ]' 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:41.179 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.436 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.436 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.436 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.694 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:19:41.694 18:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:19:42.628 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.628 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:42.628 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.628 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.628 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.628 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.628 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:42.628 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.886 18:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.886 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.144 00:19:43.144 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.144 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.144 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.402 { 00:19:43.402 "cntlid": 3, 00:19:43.402 "qid": 0, 00:19:43.402 "state": "enabled", 00:19:43.402 "thread": "nvmf_tgt_poll_group_000", 00:19:43.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:43.402 "listen_address": { 00:19:43.402 "trtype": "TCP", 00:19:43.402 "adrfam": "IPv4", 00:19:43.402 "traddr": "10.0.0.2", 00:19:43.402 "trsvcid": "4420" 00:19:43.402 }, 00:19:43.402 "peer_address": { 00:19:43.402 "trtype": "TCP", 00:19:43.402 "adrfam": "IPv4", 00:19:43.402 "traddr": "10.0.0.1", 00:19:43.402 "trsvcid": "59268" 00:19:43.402 }, 00:19:43.402 "auth": { 00:19:43.402 "state": "completed", 00:19:43.402 "digest": "sha256", 00:19:43.402 "dhgroup": "null" 00:19:43.402 } 00:19:43.402 } 00:19:43.402 ]' 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.402 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.661 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:19:43.661 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:19:44.594 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.594 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:44.594 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.594 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.594 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.594 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.594 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.594 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.852 18:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.852 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.418 00:19:45.418 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.418 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.418 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.675 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.675 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.676 { 00:19:45.676 "cntlid": 5, 00:19:45.676 "qid": 0, 00:19:45.676 "state": "enabled", 00:19:45.676 "thread": "nvmf_tgt_poll_group_000", 00:19:45.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:45.676 "listen_address": { 00:19:45.676 "trtype": "TCP", 00:19:45.676 "adrfam": "IPv4", 00:19:45.676 "traddr": "10.0.0.2", 00:19:45.676 "trsvcid": "4420" 00:19:45.676 }, 00:19:45.676 "peer_address": { 00:19:45.676 "trtype": "TCP", 00:19:45.676 "adrfam": "IPv4", 00:19:45.676 "traddr": "10.0.0.1", 00:19:45.676 "trsvcid": "59294" 00:19:45.676 }, 00:19:45.676 "auth": { 00:19:45.676 "state": "completed", 00:19:45.676 "digest": "sha256", 00:19:45.676 "dhgroup": "null" 00:19:45.676 } 00:19:45.676 } 00:19:45.676 ]' 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.676 18:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.676 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.933 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:19:45.933 18:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:19:46.865 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.865 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:46.865 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.865 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.865 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.865 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.865 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:46.865 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.123 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.381 00:19:47.381 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.381 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.381 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.638 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.638 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.638 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.638 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.638 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.638 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.638 { 00:19:47.638 "cntlid": 7, 00:19:47.638 "qid": 0, 00:19:47.638 "state": "enabled", 00:19:47.638 "thread": "nvmf_tgt_poll_group_000", 00:19:47.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:47.638 "listen_address": { 00:19:47.638 "trtype": "TCP", 00:19:47.638 "adrfam": "IPv4", 00:19:47.638 "traddr": "10.0.0.2", 00:19:47.638 "trsvcid": "4420" 00:19:47.638 }, 00:19:47.638 "peer_address": { 00:19:47.638 "trtype": "TCP", 00:19:47.638 "adrfam": "IPv4", 00:19:47.638 "traddr": "10.0.0.1", 00:19:47.638 "trsvcid": "59332" 00:19:47.638 }, 00:19:47.638 "auth": { 00:19:47.638 "state": "completed", 00:19:47.638 "digest": "sha256", 00:19:47.638 "dhgroup": "null" 00:19:47.638 } 00:19:47.638 } 00:19:47.638 ]' 00:19:47.638 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.896 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.896 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.896 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:47.896 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.896 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.896 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.896 18:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.155 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:19:48.155 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:19:49.090 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.090 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:49.090 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.090 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.090 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.090 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.090 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.090 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.090 18:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.348 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.606 00:19:49.606 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.606 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.606 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.864 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.864 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.864 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.864 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.864 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.864 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.864 { 00:19:49.864 "cntlid": 9, 00:19:49.864 "qid": 0, 00:19:49.864 "state": "enabled", 00:19:49.864 "thread": "nvmf_tgt_poll_group_000", 00:19:49.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:49.864 "listen_address": { 00:19:49.864 "trtype": "TCP", 00:19:49.864 "adrfam": "IPv4", 00:19:49.864 "traddr": "10.0.0.2", 00:19:49.864 "trsvcid": "4420" 00:19:49.864 }, 00:19:49.864 "peer_address": { 00:19:49.864 "trtype": "TCP", 00:19:49.864 "adrfam": "IPv4", 00:19:49.864 "traddr": "10.0.0.1", 00:19:49.864 "trsvcid": "36416" 00:19:49.864 }, 00:19:49.864 "auth": { 00:19:49.864 "state": "completed", 00:19:49.864 "digest": "sha256", 00:19:49.864 "dhgroup": "ffdhe2048" 00:19:49.864 } 00:19:49.864 } 00:19:49.864 ]' 00:19:49.864 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.864 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.864 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.122 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:50.122 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.122 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.122 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.122 18:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.379 18:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:19:50.379 18:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:19:51.313 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.314 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:51.314 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.314 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.314 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.314 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.314 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.314 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.570 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:51.570 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.570 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.570 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:51.570 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.570 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.570 18:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.570 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.571 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.571 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.571 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.571 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.571 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.828 00:19:51.828 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.828 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.828 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.087 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.087 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.087 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.087 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.087 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.087 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.087 { 00:19:52.087 "cntlid": 11, 00:19:52.087 "qid": 0, 00:19:52.087 "state": "enabled", 00:19:52.087 "thread": "nvmf_tgt_poll_group_000", 00:19:52.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:52.087 "listen_address": { 00:19:52.087 "trtype": "TCP", 00:19:52.087 "adrfam": "IPv4", 00:19:52.087 "traddr": "10.0.0.2", 00:19:52.087 "trsvcid": "4420" 00:19:52.087 }, 00:19:52.087 "peer_address": { 00:19:52.087 "trtype": "TCP", 00:19:52.087 "adrfam": "IPv4", 00:19:52.087 "traddr": "10.0.0.1", 00:19:52.087 "trsvcid": "36454" 00:19:52.087 }, 00:19:52.087 "auth": { 00:19:52.087 "state": "completed", 00:19:52.087 "digest": "sha256", 00:19:52.087 "dhgroup": "ffdhe2048" 00:19:52.087 } 00:19:52.087 } 00:19:52.087 ]' 00:19:52.087 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.087 18:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.087 18:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.087 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:52.087 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.087 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.087 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.087 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.345 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:19:52.345 18:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:19:53.279 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.279 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:53.279 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.279 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.537 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.537 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.537 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.537 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:53.795 18:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.795 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.053 00:19:54.053 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.054 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.054 18:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.312 { 00:19:54.312 "cntlid": 13, 00:19:54.312 "qid": 0, 00:19:54.312 "state": "enabled", 00:19:54.312 "thread": "nvmf_tgt_poll_group_000", 00:19:54.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:54.312 "listen_address": { 00:19:54.312 "trtype": "TCP", 00:19:54.312 "adrfam": "IPv4", 00:19:54.312 "traddr": "10.0.0.2", 00:19:54.312 "trsvcid": "4420" 00:19:54.312 }, 00:19:54.312 "peer_address": { 00:19:54.312 "trtype": "TCP", 00:19:54.312 "adrfam": "IPv4", 00:19:54.312 "traddr": "10.0.0.1", 00:19:54.312 "trsvcid": "36464" 00:19:54.312 }, 00:19:54.312 "auth": { 00:19:54.312 "state": "completed", 00:19:54.312 "digest": 
"sha256", 00:19:54.312 "dhgroup": "ffdhe2048" 00:19:54.312 } 00:19:54.312 } 00:19:54.312 ]' 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.312 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.879 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:19:54.879 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:19:55.445 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.703 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:55.703 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.703 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.703 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.703 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.703 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.703 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.961 18:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:55.961 18:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.218 00:19:56.218 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.218 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.218 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.475 { 00:19:56.475 "cntlid": 15, 00:19:56.475 "qid": 0, 00:19:56.475 "state": "enabled", 00:19:56.475 "thread": "nvmf_tgt_poll_group_000", 00:19:56.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:56.475 "listen_address": { 00:19:56.475 "trtype": "TCP", 00:19:56.475 "adrfam": "IPv4", 00:19:56.475 "traddr": "10.0.0.2", 00:19:56.475 "trsvcid": "4420" 00:19:56.475 }, 00:19:56.475 "peer_address": { 00:19:56.475 "trtype": "TCP", 00:19:56.475 "adrfam": "IPv4", 00:19:56.475 "traddr": "10.0.0.1", 00:19:56.475 
"trsvcid": "36484" 00:19:56.475 }, 00:19:56.475 "auth": { 00:19:56.475 "state": "completed", 00:19:56.475 "digest": "sha256", 00:19:56.475 "dhgroup": "ffdhe2048" 00:19:56.475 } 00:19:56.475 } 00:19:56.475 ]' 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.475 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.732 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.732 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.732 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.989 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:19:56.990 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:19:57.919 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.919 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:57.919 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.919 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.919 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.919 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.919 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.919 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.919 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:58.176 18:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.176 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.434 00:19:58.434 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.434 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.434 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.691 { 00:19:58.691 "cntlid": 17, 00:19:58.691 "qid": 0, 00:19:58.691 "state": "enabled", 00:19:58.691 "thread": "nvmf_tgt_poll_group_000", 00:19:58.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:58.691 "listen_address": { 00:19:58.691 "trtype": "TCP", 00:19:58.691 "adrfam": "IPv4", 
00:19:58.691 "traddr": "10.0.0.2", 00:19:58.691 "trsvcid": "4420" 00:19:58.691 }, 00:19:58.691 "peer_address": { 00:19:58.691 "trtype": "TCP", 00:19:58.691 "adrfam": "IPv4", 00:19:58.691 "traddr": "10.0.0.1", 00:19:58.691 "trsvcid": "51018" 00:19:58.691 }, 00:19:58.691 "auth": { 00:19:58.691 "state": "completed", 00:19:58.691 "digest": "sha256", 00:19:58.691 "dhgroup": "ffdhe3072" 00:19:58.691 } 00:19:58.691 } 00:19:58.691 ]' 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.691 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.692 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.256 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:19:59.256 18:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:19:59.820 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.078 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:00.078 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.078 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.078 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.078 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.078 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.078 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.336 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.637 00:20:00.637 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.637 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.637 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.916 { 
00:20:00.916 "cntlid": 19, 00:20:00.916 "qid": 0, 00:20:00.916 "state": "enabled", 00:20:00.916 "thread": "nvmf_tgt_poll_group_000", 00:20:00.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:00.916 "listen_address": { 00:20:00.916 "trtype": "TCP", 00:20:00.916 "adrfam": "IPv4", 00:20:00.916 "traddr": "10.0.0.2", 00:20:00.916 "trsvcid": "4420" 00:20:00.916 }, 00:20:00.916 "peer_address": { 00:20:00.916 "trtype": "TCP", 00:20:00.916 "adrfam": "IPv4", 00:20:00.916 "traddr": "10.0.0.1", 00:20:00.916 "trsvcid": "51056" 00:20:00.916 }, 00:20:00.916 "auth": { 00:20:00.916 "state": "completed", 00:20:00.916 "digest": "sha256", 00:20:00.916 "dhgroup": "ffdhe3072" 00:20:00.916 } 00:20:00.916 } 00:20:00.916 ]' 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.916 18:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.175 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:01.175 18:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:02.107 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.107 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:02.107 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.107 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.107 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.107 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.107 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.107 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.673 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.931 00:20:02.931 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.931 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.931 18:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.189 18:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.189 { 00:20:03.189 "cntlid": 21, 00:20:03.189 "qid": 0, 00:20:03.189 "state": "enabled", 00:20:03.189 "thread": "nvmf_tgt_poll_group_000", 00:20:03.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:03.189 "listen_address": { 00:20:03.189 "trtype": "TCP", 00:20:03.189 "adrfam": "IPv4", 00:20:03.189 "traddr": "10.0.0.2", 00:20:03.189 "trsvcid": "4420" 00:20:03.189 }, 00:20:03.189 "peer_address": { 00:20:03.189 "trtype": "TCP", 00:20:03.189 "adrfam": "IPv4", 00:20:03.189 "traddr": "10.0.0.1", 00:20:03.189 "trsvcid": "51078" 00:20:03.189 }, 00:20:03.189 "auth": { 00:20:03.189 "state": "completed", 00:20:03.189 "digest": "sha256", 00:20:03.189 "dhgroup": "ffdhe3072" 00:20:03.189 } 00:20:03.189 } 00:20:03.189 ]' 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.189 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.446 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:03.446 18:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:04.378 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.378 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:04.378 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.378 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.378 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:04.378 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.378 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.378 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:04.636 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.200 00:20:05.200 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.200 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.200 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.458 18:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.458 { 00:20:05.458 "cntlid": 23, 00:20:05.458 "qid": 0, 00:20:05.458 "state": "enabled", 00:20:05.458 "thread": "nvmf_tgt_poll_group_000", 00:20:05.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:05.458 "listen_address": { 00:20:05.458 "trtype": "TCP", 00:20:05.458 "adrfam": "IPv4", 00:20:05.458 "traddr": "10.0.0.2", 00:20:05.458 "trsvcid": "4420" 00:20:05.458 }, 00:20:05.458 "peer_address": { 00:20:05.458 "trtype": "TCP", 00:20:05.458 "adrfam": "IPv4", 00:20:05.458 "traddr": "10.0.0.1", 00:20:05.458 "trsvcid": "51108" 00:20:05.458 }, 00:20:05.458 "auth": { 00:20:05.458 "state": "completed", 00:20:05.458 "digest": "sha256", 00:20:05.458 "dhgroup": "ffdhe3072" 00:20:05.458 } 00:20:05.458 } 00:20:05.458 ]' 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.458 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.716 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:05.716 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:06.648 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.648 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.648 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.648 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.648 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:06.648 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.648 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.648 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.648 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.906 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.485 00:20:07.485 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.485 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.485 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.743 { 00:20:07.743 "cntlid": 25, 00:20:07.743 "qid": 0, 00:20:07.743 "state": "enabled", 00:20:07.743 "thread": "nvmf_tgt_poll_group_000", 00:20:07.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:07.743 "listen_address": { 00:20:07.743 "trtype": "TCP", 00:20:07.743 "adrfam": "IPv4", 00:20:07.743 "traddr": "10.0.0.2", 00:20:07.743 "trsvcid": "4420" 00:20:07.743 }, 00:20:07.743 "peer_address": { 00:20:07.743 "trtype": "TCP", 00:20:07.743 "adrfam": "IPv4", 00:20:07.743 "traddr": "10.0.0.1", 00:20:07.743 "trsvcid": "51142" 00:20:07.743 }, 00:20:07.743 "auth": { 00:20:07.743 "state": "completed", 00:20:07.743 "digest": "sha256", 00:20:07.743 "dhgroup": "ffdhe4096" 00:20:07.743 } 00:20:07.743 } 00:20:07.743 ]' 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.743 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.001 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:08.001 18:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:08.934 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.934 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:08.934 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.934 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.934 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.934 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.934 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.934 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.192 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.758 00:20:09.758 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.758 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.758 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.015 { 00:20:10.015 "cntlid": 27, 00:20:10.015 "qid": 0, 00:20:10.015 "state": "enabled", 00:20:10.015 "thread": "nvmf_tgt_poll_group_000", 00:20:10.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:10.015 "listen_address": { 00:20:10.015 "trtype": "TCP", 00:20:10.015 "adrfam": "IPv4", 00:20:10.015 "traddr": "10.0.0.2", 00:20:10.015 "trsvcid": "4420" 00:20:10.015 }, 00:20:10.015 "peer_address": { 00:20:10.015 "trtype": "TCP", 00:20:10.015 "adrfam": "IPv4", 00:20:10.015 "traddr": "10.0.0.1", 00:20:10.015 "trsvcid": "34456" 00:20:10.015 }, 00:20:10.015 "auth": { 00:20:10.015 "state": "completed", 00:20:10.015 "digest": "sha256", 00:20:10.015 "dhgroup": "ffdhe4096" 00:20:10.015 } 00:20:10.015 } 00:20:10.015 ]' 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.015 18:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.273 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:10.273 18:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:11.206 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:11.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.206 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:11.206 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.206 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.206 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.206 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.206 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:11.206 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.463 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.029 00:20:12.029 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
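For reference, one pass of the connect_authenticate loop traced above reduces to the host/target RPC sequence sketched below. This is a minimal sketch assembled only from commands that already appear in this trace: rpc.py stands for the spdk/scripts/rpc.py invoked throughout the log, the DH-CHAP keys key2/ckey2 are assumed to have been registered earlier in the run, and the target-side calls are shown against the default RPC socket (the trace itself drives them through the rpc_cmd wrapper).

  # Host: restrict the negotiable digest/dhgroup for this pass
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Target: allow the host NQN on the subsystem with the DH-CHAP key pair
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host: attach an authenticated controller over TCP
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify: controller exists and the qpair negotiated the expected auth
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

  # Tear down before the next digest/dhgroup/key combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0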
00:20:12.029 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.029 18:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.287 { 00:20:12.287 "cntlid": 29, 00:20:12.287 "qid": 0, 00:20:12.287 "state": "enabled", 00:20:12.287 "thread": "nvmf_tgt_poll_group_000", 00:20:12.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:12.287 "listen_address": { 00:20:12.287 "trtype": "TCP", 00:20:12.287 "adrfam": "IPv4", 00:20:12.287 "traddr": "10.0.0.2", 00:20:12.287 "trsvcid": "4420" 00:20:12.287 }, 00:20:12.287 "peer_address": { 00:20:12.287 "trtype": "TCP", 00:20:12.287 "adrfam": "IPv4", 00:20:12.287 "traddr": "10.0.0.1", 00:20:12.287 "trsvcid": "34480" 00:20:12.287 }, 00:20:12.287 "auth": { 00:20:12.287 "state": "completed", 00:20:12.287 "digest": "sha256", 00:20:12.287 "dhgroup": "ffdhe4096" 00:20:12.287 } 00:20:12.287 } 00:20:12.287 ]' 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.287 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.545 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:12.545 18:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: 
--dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:13.478 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.478 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:13.478 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.478 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.478 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.478 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.478 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.478 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:13.736 18:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:14.300 00:20:14.301 18:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.301 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.301 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.558 { 00:20:14.558 "cntlid": 31, 00:20:14.558 "qid": 0, 00:20:14.558 "state": "enabled", 00:20:14.558 "thread": "nvmf_tgt_poll_group_000", 00:20:14.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:14.558 "listen_address": { 00:20:14.558 "trtype": "TCP", 00:20:14.558 "adrfam": "IPv4", 00:20:14.558 "traddr": "10.0.0.2", 00:20:14.558 "trsvcid": "4420" 00:20:14.558 }, 00:20:14.558 "peer_address": { 00:20:14.558 "trtype": "TCP", 00:20:14.558 "adrfam": "IPv4", 00:20:14.558 "traddr": "10.0.0.1", 00:20:14.558 "trsvcid": "34516" 00:20:14.558 }, 00:20:14.558 "auth": { 00:20:14.558 "state": "completed", 00:20:14.558 "digest": "sha256", 00:20:14.558 "dhgroup": "ffdhe4096" 00:20:14.558 } 00:20:14.558 } 00:20:14.558 ]' 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.558 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.559 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.817 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:14.817 18:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:15.750 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.750 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.750 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.750 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.750 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.750 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.750 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.750 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:15.750 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.008 18:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.573 00:20:16.573 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.573 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.573 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.830 { 00:20:16.830 "cntlid": 33, 00:20:16.830 "qid": 0, 00:20:16.830 "state": "enabled", 00:20:16.830 "thread": "nvmf_tgt_poll_group_000", 00:20:16.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:16.830 "listen_address": { 00:20:16.830 "trtype": "TCP", 00:20:16.830 "adrfam": "IPv4", 00:20:16.830 "traddr": "10.0.0.2", 00:20:16.830 "trsvcid": "4420" 00:20:16.830 }, 00:20:16.830 "peer_address": { 00:20:16.830 "trtype": "TCP", 00:20:16.830 "adrfam": "IPv4", 00:20:16.830 "traddr": "10.0.0.1", 00:20:16.830 "trsvcid": "34546" 00:20:16.830 }, 00:20:16.830 "auth": { 00:20:16.830 "state": "completed", 00:20:16.830 "digest": "sha256", 00:20:16.830 "dhgroup": "ffdhe6144" 00:20:16.830 } 00:20:16.830 } 00:20:16.830 ]' 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.830 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.088 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.088 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.088 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.373 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret 
DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:17.373 18:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.307 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.872 00:20:18.872 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.872 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.872 18:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.130 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.130 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.130 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.130 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.130 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.130 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.130 { 00:20:19.130 "cntlid": 35, 00:20:19.130 "qid": 0, 00:20:19.130 "state": "enabled", 00:20:19.130 "thread": "nvmf_tgt_poll_group_000", 00:20:19.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:19.130 "listen_address": { 00:20:19.130 "trtype": "TCP", 00:20:19.130 "adrfam": "IPv4", 00:20:19.130 "traddr": "10.0.0.2", 00:20:19.130 "trsvcid": "4420" 00:20:19.130 }, 00:20:19.130 "peer_address": { 00:20:19.130 "trtype": "TCP", 00:20:19.130 "adrfam": "IPv4", 00:20:19.130 "traddr": "10.0.0.1", 00:20:19.130 "trsvcid": "52890" 00:20:19.130 }, 00:20:19.130 "auth": { 00:20:19.130 "state": "completed", 00:20:19.130 "digest": "sha256", 00:20:19.130 "dhgroup": "ffdhe6144" 00:20:19.130 } 00:20:19.130 } 00:20:19.130 ]' 00:20:19.130 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.387 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.387 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.387 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.387 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.387 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.387 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.387 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.644 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:19.644 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.657 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.222 00:20:21.222 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.222 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.222 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.478 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.478 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.478 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.478 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.736 { 00:20:21.736 "cntlid": 37, 00:20:21.736 "qid": 0, 00:20:21.736 "state": "enabled", 00:20:21.736 "thread": "nvmf_tgt_poll_group_000", 00:20:21.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:21.736 "listen_address": { 00:20:21.736 "trtype": "TCP", 00:20:21.736 "adrfam": "IPv4", 00:20:21.736 "traddr": "10.0.0.2", 00:20:21.736 "trsvcid": "4420" 00:20:21.736 }, 00:20:21.736 "peer_address": { 00:20:21.736 "trtype": "TCP", 00:20:21.736 "adrfam": "IPv4", 00:20:21.736 "traddr": "10.0.0.1", 00:20:21.736 "trsvcid": "52916" 00:20:21.736 }, 00:20:21.736 "auth": { 00:20:21.736 "state": "completed", 00:20:21.736 "digest": "sha256", 00:20:21.736 "dhgroup": "ffdhe6144" 00:20:21.736 } 00:20:21.736 } 00:20:21.736 ]' 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:21.736 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.993 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:21.993 18:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:22.927 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.927 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:22.927 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.927 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.927 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.927 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.927 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.927 18:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.185 18:16:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.185 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.751 00:20:23.751 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.751 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.751 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.010 { 00:20:24.010 "cntlid": 39, 00:20:24.010 "qid": 0, 00:20:24.010 "state": "enabled", 00:20:24.010 "thread": "nvmf_tgt_poll_group_000", 00:20:24.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:24.010 "listen_address": { 00:20:24.010 "trtype": "TCP", 00:20:24.010 "adrfam": "IPv4", 00:20:24.010 "traddr": "10.0.0.2", 00:20:24.010 "trsvcid": "4420" 00:20:24.010 }, 00:20:24.010 "peer_address": { 00:20:24.010 "trtype": "TCP", 00:20:24.010 "adrfam": "IPv4", 00:20:24.010 "traddr": "10.0.0.1", 00:20:24.010 "trsvcid": "52962" 00:20:24.010 }, 00:20:24.010 "auth": { 00:20:24.010 "state": "completed", 00:20:24.010 "digest": "sha256", 00:20:24.010 "dhgroup": "ffdhe6144" 00:20:24.010 } 00:20:24.010 } 00:20:24.010 ]' 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:24.010 18:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.268 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:24.268 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.268 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.526 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:24.526 18:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:25.460 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.460 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:25.460 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.460 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.460 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.460 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.460 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.460 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.460 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
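For reference, one connect_authenticate pass (sha256 / ffdhe8192 / key0) from the trace above condenses to the sketch below. Only commands, RPC sockets, addresses and NQNs that appear in the trace are used; the key0/ckey0 key files and the DHHC-1 secrets are the ones target/auth.sh registered earlier in the run and are assumed to exist, rpc_cmd stands for the framework's target-side RPC helper, and $secret/$ctrl_secret are placeholders for the DHHC-1:... strings printed in the trace.

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  subnqn=nqn.2024-03.io.spdk:cnode0
  # host-side SPDK app is driven over /var/tmp/host.sock, as in the trace
  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  # host-side bdev_nvme: restrict DH-HMAC-CHAP negotiation to the pairing under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # target side: allow the host NQN with this key pair
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host side: attach a controller, which forces in-band authentication
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # verify on the target that the qpair authenticated with the expected parameters
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # completed
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'   # sha256
  rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'  # ffdhe8192

  # tear down, then repeat the same check through the kernel initiator
  hostrpc bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
      --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"   # DHHC-1:... values printed above
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"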
00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.717 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.652 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.652 { 00:20:26.652 "cntlid": 41, 00:20:26.652 "qid": 0, 00:20:26.652 "state": "enabled", 00:20:26.652 "thread": "nvmf_tgt_poll_group_000", 00:20:26.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:26.652 "listen_address": { 00:20:26.652 "trtype": "TCP", 00:20:26.652 "adrfam": "IPv4", 00:20:26.652 "traddr": "10.0.0.2", 00:20:26.652 "trsvcid": "4420" 00:20:26.652 }, 00:20:26.652 "peer_address": { 00:20:26.652 "trtype": "TCP", 00:20:26.652 "adrfam": "IPv4", 00:20:26.652 "traddr": "10.0.0.1", 00:20:26.652 "trsvcid": "52996" 00:20:26.652 }, 00:20:26.652 "auth": { 00:20:26.652 "state": "completed", 00:20:26.652 "digest": "sha256", 00:20:26.652 "dhgroup": "ffdhe8192" 00:20:26.652 } 00:20:26.652 } 00:20:26.652 ]' 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.652 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.910 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.910 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.910 18:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.910 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.910 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.910 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.168 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:27.168 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:28.101 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.101 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:28.101 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.101 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.101 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.101 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.101 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.101 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.360 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.294 00:20:29.294 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.294 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.294 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.552 { 00:20:29.552 "cntlid": 43, 00:20:29.552 "qid": 0, 00:20:29.552 "state": "enabled", 00:20:29.552 "thread": "nvmf_tgt_poll_group_000", 00:20:29.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:29.552 "listen_address": { 00:20:29.552 "trtype": "TCP", 00:20:29.552 "adrfam": "IPv4", 00:20:29.552 "traddr": "10.0.0.2", 00:20:29.552 "trsvcid": "4420" 00:20:29.552 }, 00:20:29.552 "peer_address": { 00:20:29.552 "trtype": "TCP", 00:20:29.552 "adrfam": "IPv4", 00:20:29.552 "traddr": "10.0.0.1", 00:20:29.552 "trsvcid": "57346" 00:20:29.552 }, 00:20:29.552 "auth": { 00:20:29.552 "state": "completed", 00:20:29.552 "digest": "sha256", 00:20:29.552 "dhgroup": "ffdhe8192" 00:20:29.552 } 00:20:29.552 } 00:20:29.552 ]' 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.552 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.809 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:29.809 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:30.743 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.743 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:30.743 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.743 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.743 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.743 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.743 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.743 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:31.000 18:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.000 18:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.933 00:20:31.933 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.933 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.933 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.191 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.191 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.191 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.191 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.191 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.191 18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.191 { 00:20:32.191 "cntlid": 45, 00:20:32.191 "qid": 0, 00:20:32.191 "state": "enabled", 00:20:32.191 "thread": "nvmf_tgt_poll_group_000", 00:20:32.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:32.191 "listen_address": { 00:20:32.191 "trtype": "TCP", 00:20:32.191 "adrfam": "IPv4", 00:20:32.192 "traddr": "10.0.0.2", 00:20:32.192 "trsvcid": "4420" 00:20:32.192 }, 00:20:32.192 "peer_address": { 00:20:32.192 "trtype": "TCP", 00:20:32.192 "adrfam": "IPv4", 00:20:32.192 "traddr": "10.0.0.1", 00:20:32.192 "trsvcid": "57376" 00:20:32.192 }, 00:20:32.192 "auth": { 00:20:32.192 "state": "completed", 00:20:32.192 "digest": "sha256", 00:20:32.192 "dhgroup": "ffdhe8192" 00:20:32.192 } 00:20:32.192 } 00:20:32.192 ]' 00:20:32.192 
18:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.192 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.192 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.192 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.192 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.192 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.192 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.192 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.450 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:32.450 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:33.389 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.389 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:33.389 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.389 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.389 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.389 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.389 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.389 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.646 18:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.646 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.577 00:20:34.577 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.577 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.577 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.834 { 00:20:34.834 "cntlid": 47, 00:20:34.834 "qid": 0, 00:20:34.834 "state": "enabled", 00:20:34.834 "thread": "nvmf_tgt_poll_group_000", 00:20:34.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:34.834 "listen_address": { 00:20:34.834 "trtype": "TCP", 00:20:34.834 "adrfam": "IPv4", 00:20:34.834 "traddr": "10.0.0.2", 00:20:34.834 "trsvcid": "4420" 00:20:34.834 }, 00:20:34.834 "peer_address": { 00:20:34.834 "trtype": "TCP", 00:20:34.834 "adrfam": "IPv4", 00:20:34.834 "traddr": "10.0.0.1", 00:20:34.834 "trsvcid": "57412" 00:20:34.834 }, 00:20:34.834 "auth": { 00:20:34.834 "state": "completed", 00:20:34.834 
"digest": "sha256", 00:20:34.834 "dhgroup": "ffdhe8192" 00:20:34.834 } 00:20:34.834 } 00:20:34.834 ]' 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.834 18:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.092 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:35.092 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:36.038 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.038 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:36.038 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.038 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.038 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.038 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:36.038 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.038 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.038 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.038 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:36.295 18:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.295 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.858 00:20:36.858 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.858 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.858 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.115 { 00:20:37.115 "cntlid": 49, 00:20:37.115 "qid": 0, 00:20:37.115 "state": "enabled", 00:20:37.115 "thread": "nvmf_tgt_poll_group_000", 00:20:37.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:37.115 "listen_address": { 00:20:37.115 "trtype": "TCP", 00:20:37.115 "adrfam": "IPv4", 
00:20:37.115 "traddr": "10.0.0.2", 00:20:37.115 "trsvcid": "4420" 00:20:37.115 }, 00:20:37.115 "peer_address": { 00:20:37.115 "trtype": "TCP", 00:20:37.115 "adrfam": "IPv4", 00:20:37.115 "traddr": "10.0.0.1", 00:20:37.115 "trsvcid": "57442" 00:20:37.115 }, 00:20:37.115 "auth": { 00:20:37.115 "state": "completed", 00:20:37.115 "digest": "sha384", 00:20:37.115 "dhgroup": "null" 00:20:37.115 } 00:20:37.115 } 00:20:37.115 ]' 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.115 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:37.116 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.116 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.116 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.116 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.373 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:37.373 18:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:38.306 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.306 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:38.306 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.306 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.306 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.306 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.306 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.306 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.564 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:38.564 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.564 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.564 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:38.565 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.565 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.565 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.565 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.565 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.565 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.565 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.565 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.565 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.132 00:20:39.132 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.132 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.132 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.390 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.390 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.390 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.390 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.390 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.390 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.390 { 00:20:39.390 "cntlid": 51, 00:20:39.390 "qid": 0, 00:20:39.390 "state": "enabled", 
00:20:39.390 "thread": "nvmf_tgt_poll_group_000", 00:20:39.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:39.390 "listen_address": { 00:20:39.390 "trtype": "TCP", 00:20:39.390 "adrfam": "IPv4", 00:20:39.390 "traddr": "10.0.0.2", 00:20:39.390 "trsvcid": "4420" 00:20:39.390 }, 00:20:39.390 "peer_address": { 00:20:39.390 "trtype": "TCP", 00:20:39.390 "adrfam": "IPv4", 00:20:39.390 "traddr": "10.0.0.1", 00:20:39.390 "trsvcid": "33718" 00:20:39.390 }, 00:20:39.390 "auth": { 00:20:39.390 "state": "completed", 00:20:39.390 "digest": "sha384", 00:20:39.390 "dhgroup": "null" 00:20:39.390 } 00:20:39.390 } 00:20:39.390 ]' 00:20:39.390 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.390 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.391 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.391 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:39.391 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.391 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.391 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.391 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.649 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:39.649 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:40.583 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.583 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:40.583 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.583 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.583 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.583 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.583 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:40.583 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.850 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.465 00:20:41.465 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.465 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.465 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.465 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.465 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.465 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.465 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.465 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.465 18:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.465 { 00:20:41.465 "cntlid": 53, 00:20:41.465 "qid": 0, 00:20:41.465 "state": "enabled", 00:20:41.465 "thread": "nvmf_tgt_poll_group_000", 00:20:41.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:41.465 "listen_address": { 00:20:41.465 "trtype": "TCP", 00:20:41.465 "adrfam": "IPv4", 00:20:41.465 "traddr": "10.0.0.2", 00:20:41.465 "trsvcid": "4420" 00:20:41.465 }, 00:20:41.465 "peer_address": { 00:20:41.465 "trtype": "TCP", 00:20:41.465 "adrfam": "IPv4", 00:20:41.465 "traddr": "10.0.0.1", 00:20:41.465 "trsvcid": "33742" 00:20:41.465 }, 00:20:41.465 "auth": { 00:20:41.465 "state": "completed", 00:20:41.465 "digest": "sha384", 00:20:41.465 "dhgroup": "null" 00:20:41.465 } 00:20:41.465 } 00:20:41.465 ]' 00:20:41.724 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.724 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.724 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.724 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:41.724 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.724 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.724 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.724 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.981 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:41.981 18:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:42.916 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.916 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.916 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.916 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.916 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.916 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:42.916 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.916 18:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.173 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:43.739 00:20:43.739 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.739 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.739 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.997 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.997 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.997 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.997 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.997 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.997 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.997 { 00:20:43.997 "cntlid": 55, 00:20:43.997 "qid": 0, 00:20:43.997 "state": "enabled", 00:20:43.997 "thread": "nvmf_tgt_poll_group_000", 00:20:43.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:43.997 "listen_address": { 00:20:43.997 "trtype": "TCP", 00:20:43.997 "adrfam": "IPv4", 00:20:43.997 "traddr": "10.0.0.2", 00:20:43.997 "trsvcid": "4420" 00:20:43.997 }, 00:20:43.997 "peer_address": { 00:20:43.997 "trtype": "TCP", 00:20:43.997 "adrfam": "IPv4", 00:20:43.997 "traddr": "10.0.0.1", 00:20:43.997 "trsvcid": "33766" 00:20:43.997 }, 00:20:43.997 "auth": { 00:20:43.997 "state": "completed", 00:20:43.997 "digest": "sha384", 00:20:43.997 "dhgroup": "null" 00:20:43.997 } 00:20:43.997 } 00:20:43.997 ]' 00:20:43.998 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.998 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.998 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.998 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.998 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.998 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.998 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.998 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.256 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:44.256 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:45.189 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.189 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:45.189 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.189 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.189 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.189 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.189 18:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.189 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.189 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.448 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.013 00:20:46.013 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.013 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.013 18:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.271 { 00:20:46.271 "cntlid": 57, 00:20:46.271 "qid": 0, 00:20:46.271 "state": "enabled", 00:20:46.271 "thread": "nvmf_tgt_poll_group_000", 00:20:46.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:46.271 "listen_address": { 00:20:46.271 "trtype": "TCP", 00:20:46.271 "adrfam": "IPv4", 00:20:46.271 "traddr": "10.0.0.2", 00:20:46.271 "trsvcid": "4420" 00:20:46.271 }, 00:20:46.271 "peer_address": { 00:20:46.271 "trtype": "TCP", 00:20:46.271 "adrfam": "IPv4", 00:20:46.271 "traddr": "10.0.0.1", 00:20:46.271 "trsvcid": "33784" 00:20:46.271 }, 00:20:46.271 "auth": { 00:20:46.271 "state": "completed", 00:20:46.271 "digest": "sha384", 00:20:46.271 "dhgroup": "ffdhe2048" 00:20:46.271 } 00:20:46.271 } 00:20:46.271 ]' 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.271 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.529 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:46.530 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:47.465 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.465 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:47.465 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.465 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.465 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.465 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.465 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.465 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.724 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.290 00:20:48.290 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.290 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.290 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.290 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.290 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.290 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.290 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.548 { 00:20:48.548 "cntlid": 59, 00:20:48.548 "qid": 0, 00:20:48.548 "state": "enabled", 00:20:48.548 "thread": "nvmf_tgt_poll_group_000", 00:20:48.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:48.548 "listen_address": { 00:20:48.548 "trtype": "TCP", 00:20:48.548 "adrfam": "IPv4", 00:20:48.548 "traddr": "10.0.0.2", 00:20:48.548 "trsvcid": "4420" 00:20:48.548 }, 00:20:48.548 "peer_address": { 00:20:48.548 "trtype": "TCP", 00:20:48.548 "adrfam": "IPv4", 00:20:48.548 "traddr": "10.0.0.1", 00:20:48.548 "trsvcid": "33808" 00:20:48.548 }, 00:20:48.548 "auth": { 00:20:48.548 "state": "completed", 00:20:48.548 "digest": "sha384", 00:20:48.548 "dhgroup": "ffdhe2048" 00:20:48.548 } 00:20:48.548 } 00:20:48.548 ]' 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.548 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.805 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:48.805 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:49.739 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.739 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:49.739 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.739 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.739 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.739 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.739 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.739 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.997 18:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.255 00:20:50.255 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.255 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:50.255 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.513 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.513 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.513 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.513 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.513 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.513 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.513 { 00:20:50.513 "cntlid": 61, 00:20:50.513 "qid": 0, 00:20:50.513 "state": "enabled", 00:20:50.513 "thread": "nvmf_tgt_poll_group_000", 00:20:50.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:50.513 "listen_address": { 00:20:50.513 "trtype": "TCP", 00:20:50.513 "adrfam": "IPv4", 00:20:50.513 "traddr": "10.0.0.2", 00:20:50.513 "trsvcid": "4420" 00:20:50.513 }, 00:20:50.513 "peer_address": { 00:20:50.513 "trtype": "TCP", 00:20:50.513 "adrfam": "IPv4", 00:20:50.513 "traddr": "10.0.0.1", 00:20:50.513 "trsvcid": "45576" 00:20:50.513 }, 00:20:50.513 "auth": { 00:20:50.513 "state": "completed", 00:20:50.513 "digest": "sha384", 00:20:50.513 "dhgroup": "ffdhe2048" 00:20:50.513 } 00:20:50.513 } 00:20:50.513 ]' 00:20:50.513 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.772 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.772 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.772 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.772 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.772 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.772 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.772 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.030 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:51.030 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:20:51.972 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.972 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:51.972 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.972 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.972 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.972 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.972 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.972 18:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.231 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.796 00:20:52.796 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.796 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.796 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.053 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.053 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.053 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.053 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.053 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.053 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.053 { 00:20:53.053 "cntlid": 63, 00:20:53.053 "qid": 0, 00:20:53.053 "state": "enabled", 00:20:53.053 "thread": "nvmf_tgt_poll_group_000", 00:20:53.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:53.053 "listen_address": { 00:20:53.053 "trtype": "TCP", 00:20:53.053 "adrfam": "IPv4", 00:20:53.053 "traddr": "10.0.0.2", 00:20:53.053 "trsvcid": "4420" 00:20:53.053 }, 00:20:53.053 "peer_address": { 00:20:53.053 "trtype": "TCP", 00:20:53.053 "adrfam": "IPv4", 00:20:53.053 "traddr": "10.0.0.1", 00:20:53.053 "trsvcid": "45608" 00:20:53.053 }, 00:20:53.053 "auth": { 00:20:53.053 "state": "completed", 00:20:53.053 "digest": "sha384", 00:20:53.053 "dhgroup": "ffdhe2048" 00:20:53.053 } 00:20:53.053 } 00:20:53.053 ]' 00:20:53.054 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.054 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.054 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.054 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:53.054 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.054 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.054 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.054 18:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.311 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:53.311 18:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:20:54.244 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:54.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.244 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.244 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.244 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.244 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.244 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.244 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.244 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.244 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.810 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:54.810 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.810 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.810 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:54.811 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.811 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.811 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.811 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.811 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.811 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.811 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.811 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.811 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.068 
00:20:55.068 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.068 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.068 18:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.329 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.329 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.329 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.329 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.330 { 00:20:55.330 "cntlid": 65, 00:20:55.330 "qid": 0, 00:20:55.330 "state": "enabled", 00:20:55.330 "thread": "nvmf_tgt_poll_group_000", 00:20:55.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:55.330 "listen_address": { 00:20:55.330 "trtype": "TCP", 00:20:55.330 "adrfam": "IPv4", 00:20:55.330 "traddr": "10.0.0.2", 00:20:55.330 "trsvcid": "4420" 00:20:55.330 }, 00:20:55.330 "peer_address": { 00:20:55.330 "trtype": "TCP", 00:20:55.330 "adrfam": "IPv4", 00:20:55.330 "traddr": "10.0.0.1", 00:20:55.330 "trsvcid": "45628" 00:20:55.330 }, 00:20:55.330 "auth": { 00:20:55.330 "state": "completed", 00:20:55.330 "digest": "sha384", 00:20:55.330 "dhgroup": "ffdhe3072" 00:20:55.330 } 00:20:55.330 } 00:20:55.330 ]' 00:20:55.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.330 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.331 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.331 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.331 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.592 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:55.592 18:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:20:56.524 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.524 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:56.524 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.524 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.524 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.524 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.524 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.524 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.090 18:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.348 00:20:57.348 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.348 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.348 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.606 { 00:20:57.606 "cntlid": 67, 00:20:57.606 "qid": 0, 00:20:57.606 "state": "enabled", 00:20:57.606 "thread": "nvmf_tgt_poll_group_000", 00:20:57.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:57.606 "listen_address": { 00:20:57.606 "trtype": "TCP", 00:20:57.606 "adrfam": "IPv4", 00:20:57.606 "traddr": "10.0.0.2", 00:20:57.606 "trsvcid": "4420" 00:20:57.606 }, 00:20:57.606 "peer_address": { 00:20:57.606 "trtype": "TCP", 00:20:57.606 "adrfam": "IPv4", 00:20:57.606 "traddr": "10.0.0.1", 00:20:57.606 "trsvcid": "45648" 00:20:57.606 }, 00:20:57.606 "auth": { 00:20:57.606 "state": "completed", 00:20:57.606 "digest": "sha384", 00:20:57.606 "dhgroup": "ffdhe3072" 00:20:57.606 } 00:20:57.606 } 00:20:57.606 ]' 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.606 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.863 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.863 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.863 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.121 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret 
DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:58.121 18:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:20:59.055 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.055 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:59.055 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.055 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.055 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.055 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.055 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.055 18:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.313 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.570 00:20:59.570 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.570 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.570 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.827 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.828 { 00:20:59.828 "cntlid": 69, 00:20:59.828 "qid": 0, 00:20:59.828 "state": "enabled", 00:20:59.828 "thread": "nvmf_tgt_poll_group_000", 00:20:59.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:59.828 "listen_address": { 00:20:59.828 "trtype": "TCP", 00:20:59.828 "adrfam": "IPv4", 00:20:59.828 "traddr": "10.0.0.2", 00:20:59.828 "trsvcid": "4420" 00:20:59.828 }, 00:20:59.828 "peer_address": { 00:20:59.828 "trtype": "TCP", 00:20:59.828 "adrfam": "IPv4", 00:20:59.828 "traddr": "10.0.0.1", 00:20:59.828 "trsvcid": "47776" 00:20:59.828 }, 00:20:59.828 "auth": { 00:20:59.828 "state": "completed", 00:20:59.828 "digest": "sha384", 00:20:59.828 "dhgroup": "ffdhe3072" 00:20:59.828 } 00:20:59.828 } 00:20:59.828 ]' 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.828 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.085 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.085 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.085 18:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:00.343 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:00.343 18:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:01.312 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.312 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:01.312 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.312 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.312 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.312 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.312 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.313 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.878 00:21:01.878 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.878 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.878 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.134 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.134 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.134 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.134 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.134 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.134 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.134 { 00:21:02.134 "cntlid": 71, 00:21:02.134 "qid": 0, 00:21:02.134 "state": "enabled", 00:21:02.134 "thread": "nvmf_tgt_poll_group_000", 00:21:02.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:02.134 "listen_address": { 00:21:02.134 "trtype": "TCP", 00:21:02.134 "adrfam": "IPv4", 00:21:02.134 "traddr": "10.0.0.2", 00:21:02.134 "trsvcid": "4420" 00:21:02.134 }, 00:21:02.134 "peer_address": { 00:21:02.134 "trtype": "TCP", 00:21:02.134 "adrfam": "IPv4", 00:21:02.134 "traddr": "10.0.0.1", 00:21:02.134 "trsvcid": "47810" 00:21:02.134 }, 00:21:02.134 "auth": { 00:21:02.134 "state": "completed", 00:21:02.134 "digest": "sha384", 00:21:02.134 "dhgroup": "ffdhe3072" 00:21:02.134 } 00:21:02.134 } 00:21:02.134 ]' 00:21:02.134 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.134 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.134 18:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.134 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:02.134 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.134 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.134 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.134 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.390 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:02.390 18:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:03.322 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.322 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:03.322 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.322 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.322 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.322 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.322 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.322 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.322 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
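Every combination of digest, DH group and key index in this part of the log follows the same shape: the host is restricted to the digest/dhgroup under test, the target is told which key pair the host may present, the SPDK host attaches and is verified, and the kernel initiator then repeats the handshake with the raw DHHC-1 secrets before the host entry is removed again. A condensed per-iteration sketch, under the same assumptions as the sketch above; the DHHC-1 strings are placeholders rather than the secrets from this log, and rpc_cmd again stands in for the harness helper:

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_cmd() { "$SPDK/scripts/rpc.py" "$@"; }                       # target-side RPC (simplified)
hostrpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; } # SPDK host application RPC
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
SUBNQN=nqn.2024-03.io.spdk:cnode0
keyid=0   # the log iterates this over the registered key indices; for key3 the ctrlr-key arguments are omitted

# Host side: pin the negotiation to the digest/dhgroup under test (target/auth.sh@121).
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Target side: allow this host to authenticate with the selected key pair (target/auth.sh@70).
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# SPDK host side: attach with the named keys, then detach after the qpair checks (auth.sh@71/@78).
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
hostrpc bdev_nvme_detach_controller nvme0

# Kernel initiator side: same handshake through nvme-cli with literal DHHC-1 secrets (auth.sh@80/@82).
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "DHHC-1:00:<host-secret-placeholder>:" \
    --dhchap-ctrl-secret "DHHC-1:03:<ctrl-secret-placeholder>:"
nvme disconnect -n "$SUBNQN"

# Tear down the host entry before the next key/dhgroup combination (auth.sh@83).
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"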
00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.579 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.152 00:21:04.152 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.152 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.152 18:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.152 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.152 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.152 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.152 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.152 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.152 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.152 { 00:21:04.152 "cntlid": 73, 00:21:04.152 "qid": 0, 00:21:04.152 "state": "enabled", 00:21:04.152 "thread": "nvmf_tgt_poll_group_000", 00:21:04.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:04.152 "listen_address": { 00:21:04.152 "trtype": "TCP", 00:21:04.152 "adrfam": "IPv4", 00:21:04.152 "traddr": "10.0.0.2", 00:21:04.152 "trsvcid": "4420" 00:21:04.152 }, 00:21:04.152 "peer_address": { 00:21:04.152 "trtype": "TCP", 00:21:04.152 "adrfam": "IPv4", 00:21:04.152 "traddr": "10.0.0.1", 00:21:04.152 "trsvcid": "47828" 00:21:04.152 }, 00:21:04.152 "auth": { 00:21:04.152 "state": "completed", 00:21:04.152 "digest": "sha384", 00:21:04.152 "dhgroup": "ffdhe4096" 00:21:04.152 } 00:21:04.152 } 00:21:04.152 ]' 00:21:04.152 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.410 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.410 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.410 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.410 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.410 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.410 
18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.410 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.668 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:04.668 18:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:05.602 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.602 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:05.602 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.602 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.602 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.602 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.603 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.603 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.860 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:05.860 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.860 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.860 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.860 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:05.860 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.861 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.861 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.861 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.861 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.861 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.861 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.861 18:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.118 00:21:06.376 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.376 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.376 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.634 { 00:21:06.634 "cntlid": 75, 00:21:06.634 "qid": 0, 00:21:06.634 "state": "enabled", 00:21:06.634 "thread": "nvmf_tgt_poll_group_000", 00:21:06.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:06.634 "listen_address": { 00:21:06.634 "trtype": "TCP", 00:21:06.634 "adrfam": "IPv4", 00:21:06.634 "traddr": "10.0.0.2", 00:21:06.634 "trsvcid": "4420" 00:21:06.634 }, 00:21:06.634 "peer_address": { 00:21:06.634 "trtype": "TCP", 00:21:06.634 "adrfam": "IPv4", 00:21:06.634 "traddr": "10.0.0.1", 00:21:06.634 "trsvcid": "47856" 00:21:06.634 }, 00:21:06.634 "auth": { 00:21:06.634 "state": "completed", 00:21:06.634 "digest": "sha384", 00:21:06.634 "dhgroup": "ffdhe4096" 00:21:06.634 } 00:21:06.634 } 00:21:06.634 ]' 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.634 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.892 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:06.893 18:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:07.826 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.826 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:07.826 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.826 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.826 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.826 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.826 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:07.826 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.084 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:08.084 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.084 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.084 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:08.084 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.085 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.085 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.085 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.085 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.085 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.085 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.085 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.085 18:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.342 00:21:08.600 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.600 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.600 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.858 { 00:21:08.858 "cntlid": 77, 00:21:08.858 "qid": 0, 00:21:08.858 "state": "enabled", 00:21:08.858 "thread": "nvmf_tgt_poll_group_000", 00:21:08.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:08.858 "listen_address": { 00:21:08.858 "trtype": "TCP", 00:21:08.858 "adrfam": "IPv4", 00:21:08.858 "traddr": "10.0.0.2", 00:21:08.858 "trsvcid": "4420" 00:21:08.858 }, 00:21:08.858 "peer_address": { 00:21:08.858 "trtype": "TCP", 00:21:08.858 "adrfam": "IPv4", 00:21:08.858 "traddr": "10.0.0.1", 00:21:08.858 "trsvcid": "50138" 00:21:08.858 }, 00:21:08.858 "auth": { 00:21:08.858 "state": "completed", 00:21:08.858 "digest": "sha384", 00:21:08.858 "dhgroup": "ffdhe4096" 00:21:08.858 } 00:21:08.858 } 00:21:08.858 ]' 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.858 18:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.858 18:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.116 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:09.116 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:10.049 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.049 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:10.049 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.049 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.049 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.049 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.049 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:10.049 18:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.307 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.872 00:21:10.872 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.872 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.872 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.129 { 00:21:11.129 "cntlid": 79, 00:21:11.129 "qid": 0, 00:21:11.129 "state": "enabled", 00:21:11.129 "thread": "nvmf_tgt_poll_group_000", 00:21:11.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:11.129 "listen_address": { 00:21:11.129 "trtype": "TCP", 00:21:11.129 "adrfam": "IPv4", 00:21:11.129 "traddr": "10.0.0.2", 00:21:11.129 "trsvcid": "4420" 00:21:11.129 }, 00:21:11.129 "peer_address": { 00:21:11.129 "trtype": "TCP", 00:21:11.129 "adrfam": "IPv4", 00:21:11.129 "traddr": "10.0.0.1", 00:21:11.129 "trsvcid": "50164" 00:21:11.129 }, 00:21:11.129 "auth": { 00:21:11.129 "state": "completed", 00:21:11.129 "digest": "sha384", 00:21:11.129 "dhgroup": "ffdhe4096" 00:21:11.129 } 00:21:11.129 } 00:21:11.129 ]' 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.129 18:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:11.129 18:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.129 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.129 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.129 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.386 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:11.386 18:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:12.317 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.317 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:12.318 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.318 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.318 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.318 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.318 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.318 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.318 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:12.575 18:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.575 18:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.140 00:21:13.140 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.140 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.140 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.398 { 00:21:13.398 "cntlid": 81, 00:21:13.398 "qid": 0, 00:21:13.398 "state": "enabled", 00:21:13.398 "thread": "nvmf_tgt_poll_group_000", 00:21:13.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:13.398 "listen_address": { 00:21:13.398 "trtype": "TCP", 00:21:13.398 "adrfam": "IPv4", 00:21:13.398 "traddr": "10.0.0.2", 00:21:13.398 "trsvcid": "4420" 00:21:13.398 }, 00:21:13.398 "peer_address": { 00:21:13.398 "trtype": "TCP", 00:21:13.398 "adrfam": "IPv4", 00:21:13.398 "traddr": "10.0.0.1", 00:21:13.398 "trsvcid": "50186" 00:21:13.398 }, 00:21:13.398 "auth": { 00:21:13.398 "state": "completed", 00:21:13.398 "digest": 
"sha384", 00:21:13.398 "dhgroup": "ffdhe6144" 00:21:13.398 } 00:21:13.398 } 00:21:13.398 ]' 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.398 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.655 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.655 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.655 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.912 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:13.912 18:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:14.844 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.844 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:14.844 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.844 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.844 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.844 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.844 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.844 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.101 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.102 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.102 18:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.665 00:21:15.665 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.665 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.665 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.922 { 00:21:15.922 "cntlid": 83, 00:21:15.922 "qid": 0, 00:21:15.922 "state": "enabled", 00:21:15.922 "thread": "nvmf_tgt_poll_group_000", 00:21:15.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:15.922 "listen_address": { 00:21:15.922 "trtype": "TCP", 00:21:15.922 "adrfam": "IPv4", 00:21:15.922 "traddr": "10.0.0.2", 00:21:15.922 
"trsvcid": "4420" 00:21:15.922 }, 00:21:15.922 "peer_address": { 00:21:15.922 "trtype": "TCP", 00:21:15.922 "adrfam": "IPv4", 00:21:15.922 "traddr": "10.0.0.1", 00:21:15.922 "trsvcid": "50220" 00:21:15.922 }, 00:21:15.922 "auth": { 00:21:15.922 "state": "completed", 00:21:15.922 "digest": "sha384", 00:21:15.922 "dhgroup": "ffdhe6144" 00:21:15.922 } 00:21:15.922 } 00:21:15.922 ]' 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.922 18:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.180 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:16.180 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:17.111 18:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.111 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:17.111 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.111 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.111 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.111 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.111 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:17.111 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:17.368 
18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.368 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.934 00:21:18.193 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.193 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.193 18:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.450 { 00:21:18.450 "cntlid": 85, 00:21:18.450 "qid": 0, 00:21:18.450 "state": "enabled", 00:21:18.450 "thread": "nvmf_tgt_poll_group_000", 00:21:18.450 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:18.450 "listen_address": { 00:21:18.450 "trtype": "TCP", 00:21:18.450 "adrfam": "IPv4", 00:21:18.450 "traddr": "10.0.0.2", 00:21:18.450 "trsvcid": "4420" 00:21:18.450 }, 00:21:18.450 "peer_address": { 00:21:18.450 "trtype": "TCP", 00:21:18.450 "adrfam": "IPv4", 00:21:18.450 "traddr": "10.0.0.1", 00:21:18.450 "trsvcid": "50264" 00:21:18.450 }, 00:21:18.450 "auth": { 00:21:18.450 "state": "completed", 00:21:18.450 "digest": "sha384", 00:21:18.450 "dhgroup": "ffdhe6144" 00:21:18.450 } 00:21:18.450 } 00:21:18.450 ]' 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.450 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.708 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:18.708 18:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:19.643 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.643 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:19.643 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.643 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.643 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.643 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.643 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.643 18:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:19.900 18:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:20.465 00:21:20.465 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.466 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.466 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.751 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.751 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.751 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.751 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.751 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.751 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.751 { 00:21:20.751 "cntlid": 87, 
00:21:20.751 "qid": 0, 00:21:20.751 "state": "enabled", 00:21:20.751 "thread": "nvmf_tgt_poll_group_000", 00:21:20.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:20.751 "listen_address": { 00:21:20.751 "trtype": "TCP", 00:21:20.751 "adrfam": "IPv4", 00:21:20.751 "traddr": "10.0.0.2", 00:21:20.751 "trsvcid": "4420" 00:21:20.751 }, 00:21:20.751 "peer_address": { 00:21:20.751 "trtype": "TCP", 00:21:20.751 "adrfam": "IPv4", 00:21:20.751 "traddr": "10.0.0.1", 00:21:20.751 "trsvcid": "58748" 00:21:20.751 }, 00:21:20.751 "auth": { 00:21:20.751 "state": "completed", 00:21:20.751 "digest": "sha384", 00:21:20.751 "dhgroup": "ffdhe6144" 00:21:20.751 } 00:21:20.751 } 00:21:20.751 ]' 00:21:20.751 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.751 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.751 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.033 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.033 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.033 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.033 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.033 18:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.314 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:21.314 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:22.247 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.247 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:22.247 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.247 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.247 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.247 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.247 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.247 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.247 18:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.247 18:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.181 00:21:23.181 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.181 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.181 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.439 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.439 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.439 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.439 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.439 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.439 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.439 { 00:21:23.439 "cntlid": 89, 00:21:23.439 "qid": 0, 00:21:23.439 "state": "enabled", 00:21:23.439 "thread": "nvmf_tgt_poll_group_000", 00:21:23.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:23.439 "listen_address": { 00:21:23.439 "trtype": "TCP", 00:21:23.439 "adrfam": "IPv4", 00:21:23.439 "traddr": "10.0.0.2", 00:21:23.439 "trsvcid": "4420" 00:21:23.439 }, 00:21:23.439 "peer_address": { 00:21:23.439 "trtype": "TCP", 00:21:23.439 "adrfam": "IPv4", 00:21:23.439 "traddr": "10.0.0.1", 00:21:23.439 "trsvcid": "58790" 00:21:23.439 }, 00:21:23.439 "auth": { 00:21:23.439 "state": "completed", 00:21:23.439 "digest": "sha384", 00:21:23.439 "dhgroup": "ffdhe8192" 00:21:23.439 } 00:21:23.439 } 00:21:23.439 ]' 00:21:23.439 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.439 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.439 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.697 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.697 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.697 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.697 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.697 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.955 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:23.955 18:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:24.887 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.887 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:24.887 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.887 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.887 18:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.887 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.887 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.887 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.145 18:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.081 00:21:26.081 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.081 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.081 18:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.081 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.081 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:26.081 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.081 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.339 { 00:21:26.339 "cntlid": 91, 00:21:26.339 "qid": 0, 00:21:26.339 "state": "enabled", 00:21:26.339 "thread": "nvmf_tgt_poll_group_000", 00:21:26.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:26.339 "listen_address": { 00:21:26.339 "trtype": "TCP", 00:21:26.339 "adrfam": "IPv4", 00:21:26.339 "traddr": "10.0.0.2", 00:21:26.339 "trsvcid": "4420" 00:21:26.339 }, 00:21:26.339 "peer_address": { 00:21:26.339 "trtype": "TCP", 00:21:26.339 "adrfam": "IPv4", 00:21:26.339 "traddr": "10.0.0.1", 00:21:26.339 "trsvcid": "58806" 00:21:26.339 }, 00:21:26.339 "auth": { 00:21:26.339 "state": "completed", 00:21:26.339 "digest": "sha384", 00:21:26.339 "dhgroup": "ffdhe8192" 00:21:26.339 } 00:21:26.339 } 00:21:26.339 ]' 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.339 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.597 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:26.597 18:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:27.530 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.530 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:27.530 18:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.530 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.530 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.530 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.530 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.530 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.788 18:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.722 00:21:28.722 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.722 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.722 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.980 18:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.980 { 00:21:28.980 "cntlid": 93, 00:21:28.980 "qid": 0, 00:21:28.980 "state": "enabled", 00:21:28.980 "thread": "nvmf_tgt_poll_group_000", 00:21:28.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:28.980 "listen_address": { 00:21:28.980 "trtype": "TCP", 00:21:28.980 "adrfam": "IPv4", 00:21:28.980 "traddr": "10.0.0.2", 00:21:28.980 "trsvcid": "4420" 00:21:28.980 }, 00:21:28.980 "peer_address": { 00:21:28.980 "trtype": "TCP", 00:21:28.980 "adrfam": "IPv4", 00:21:28.980 "traddr": "10.0.0.1", 00:21:28.980 "trsvcid": "58824" 00:21:28.980 }, 00:21:28.980 "auth": { 00:21:28.980 "state": "completed", 00:21:28.980 "digest": "sha384", 00:21:28.980 "dhgroup": "ffdhe8192" 00:21:28.980 } 00:21:28.980 } 00:21:28.980 ]' 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.980 18:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.238 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:29.238 18:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:30.172 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.172 18:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:30.172 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.172 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.172 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.172 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.172 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.172 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.430 18:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.365 00:21:31.365 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.365 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.365 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.622 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.622 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.622 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.622 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.622 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.622 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.622 { 00:21:31.622 "cntlid": 95, 00:21:31.622 "qid": 0, 00:21:31.622 "state": "enabled", 00:21:31.622 "thread": "nvmf_tgt_poll_group_000", 00:21:31.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:31.622 "listen_address": { 00:21:31.622 "trtype": "TCP", 00:21:31.622 "adrfam": "IPv4", 00:21:31.622 "traddr": "10.0.0.2", 00:21:31.622 "trsvcid": "4420" 00:21:31.622 }, 00:21:31.622 "peer_address": { 00:21:31.622 "trtype": "TCP", 00:21:31.622 "adrfam": "IPv4", 00:21:31.622 "traddr": "10.0.0.1", 00:21:31.622 "trsvcid": "33642" 00:21:31.622 }, 00:21:31.622 "auth": { 00:21:31.622 "state": "completed", 00:21:31.622 "digest": "sha384", 00:21:31.622 "dhgroup": "ffdhe8192" 00:21:31.622 } 00:21:31.622 } 00:21:31.622 ]' 00:21:31.622 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.622 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.622 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.880 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.880 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.880 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.880 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.880 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.138 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:32.138 18:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:33.072 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.072 18:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:33.072 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.072 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.072 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.072 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:33.072 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.072 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.072 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.072 18:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.331 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.897 00:21:33.897 
18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.897 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.897 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.897 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.897 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.897 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.897 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.897 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.897 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.897 { 00:21:33.897 "cntlid": 97, 00:21:33.897 "qid": 0, 00:21:33.897 "state": "enabled", 00:21:33.897 "thread": "nvmf_tgt_poll_group_000", 00:21:33.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:33.897 "listen_address": { 00:21:33.897 "trtype": "TCP", 00:21:33.897 "adrfam": "IPv4", 00:21:33.897 "traddr": "10.0.0.2", 00:21:33.897 "trsvcid": "4420" 00:21:33.897 }, 00:21:33.897 "peer_address": { 00:21:33.897 "trtype": "TCP", 00:21:33.897 "adrfam": "IPv4", 00:21:33.897 "traddr": "10.0.0.1", 00:21:33.897 "trsvcid": "33662" 00:21:33.897 }, 00:21:33.897 "auth": { 00:21:33.897 "state": "completed", 00:21:33.897 "digest": "sha512", 00:21:33.897 "dhgroup": "null" 00:21:33.897 } 00:21:33.897 } 00:21:33.897 ]' 00:21:33.897 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.155 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.155 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.155 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:34.155 18:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.155 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.155 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.155 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.414 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:34.414 18:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:35.349 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.349 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:35.349 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.349 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.349 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.349 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.349 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.349 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.608 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.173 00:21:36.173 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.173 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.173 18:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.431 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.431 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.431 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.431 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.431 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.431 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.431 { 00:21:36.431 "cntlid": 99, 00:21:36.431 "qid": 0, 00:21:36.431 "state": "enabled", 00:21:36.431 "thread": "nvmf_tgt_poll_group_000", 00:21:36.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:36.431 "listen_address": { 00:21:36.431 "trtype": "TCP", 00:21:36.431 "adrfam": "IPv4", 00:21:36.431 "traddr": "10.0.0.2", 00:21:36.431 "trsvcid": "4420" 00:21:36.431 }, 00:21:36.431 "peer_address": { 00:21:36.431 "trtype": "TCP", 00:21:36.431 "adrfam": "IPv4", 00:21:36.431 "traddr": "10.0.0.1", 00:21:36.431 "trsvcid": "33706" 00:21:36.431 }, 00:21:36.431 "auth": { 00:21:36.431 "state": "completed", 00:21:36.431 "digest": "sha512", 00:21:36.432 "dhgroup": "null" 00:21:36.432 } 00:21:36.432 } 00:21:36.432 ]' 00:21:36.432 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.432 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.432 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.432 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:36.432 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.432 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.432 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.432 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.689 18:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:36.689 18:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:37.622 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.622 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:37.622 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.622 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.622 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.622 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.622 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.622 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:37.880 18:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.138 00:21:38.138 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.138 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.138 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.705 { 00:21:38.705 "cntlid": 101, 00:21:38.705 "qid": 0, 00:21:38.705 "state": "enabled", 00:21:38.705 "thread": "nvmf_tgt_poll_group_000", 00:21:38.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:38.705 "listen_address": { 00:21:38.705 "trtype": "TCP", 00:21:38.705 "adrfam": "IPv4", 00:21:38.705 "traddr": "10.0.0.2", 00:21:38.705 "trsvcid": "4420" 00:21:38.705 }, 00:21:38.705 "peer_address": { 00:21:38.705 "trtype": "TCP", 00:21:38.705 "adrfam": "IPv4", 00:21:38.705 "traddr": "10.0.0.1", 00:21:38.705 "trsvcid": "33830" 00:21:38.705 }, 00:21:38.705 "auth": { 00:21:38.705 "state": "completed", 00:21:38.705 "digest": "sha512", 00:21:38.705 "dhgroup": "null" 00:21:38.705 } 00:21:38.705 } 00:21:38.705 ]' 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.705 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.964 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:38.964 18:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:39.914 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.914 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.914 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.914 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.914 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.914 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.914 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.914 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:40.171 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:40.171 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.171 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.171 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:40.171 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.171 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.171 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:40.171 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.171 18:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.171 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.171 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.171 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.171 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.441 00:21:40.441 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.441 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.441 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.752 { 00:21:40.752 "cntlid": 103, 00:21:40.752 "qid": 0, 00:21:40.752 "state": "enabled", 00:21:40.752 "thread": "nvmf_tgt_poll_group_000", 00:21:40.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:40.752 "listen_address": { 00:21:40.752 "trtype": "TCP", 00:21:40.752 "adrfam": "IPv4", 00:21:40.752 "traddr": "10.0.0.2", 00:21:40.752 "trsvcid": "4420" 00:21:40.752 }, 00:21:40.752 "peer_address": { 00:21:40.752 "trtype": "TCP", 00:21:40.752 "adrfam": "IPv4", 00:21:40.752 "traddr": "10.0.0.1", 00:21:40.752 "trsvcid": "33848" 00:21:40.752 }, 00:21:40.752 "auth": { 00:21:40.752 "state": "completed", 00:21:40.752 "digest": "sha512", 00:21:40.752 "dhgroup": "null" 00:21:40.752 } 00:21:40.752 } 00:21:40.752 ]' 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.752 18:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.317 18:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:41.317 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:42.250 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.250 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:42.250 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.250 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.250 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.250 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.250 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.250 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.250 18:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.250 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.817 00:21:42.817 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.817 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.817 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.076 { 00:21:43.076 "cntlid": 105, 00:21:43.076 "qid": 0, 00:21:43.076 "state": "enabled", 00:21:43.076 "thread": "nvmf_tgt_poll_group_000", 00:21:43.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:43.076 "listen_address": { 00:21:43.076 "trtype": "TCP", 00:21:43.076 "adrfam": "IPv4", 00:21:43.076 "traddr": "10.0.0.2", 00:21:43.076 "trsvcid": "4420" 00:21:43.076 }, 00:21:43.076 "peer_address": { 00:21:43.076 "trtype": "TCP", 00:21:43.076 "adrfam": "IPv4", 00:21:43.076 "traddr": "10.0.0.1", 00:21:43.076 "trsvcid": "33882" 00:21:43.076 }, 00:21:43.076 "auth": { 00:21:43.076 "state": "completed", 00:21:43.076 "digest": "sha512", 00:21:43.076 "dhgroup": "ffdhe2048" 00:21:43.076 } 00:21:43.076 } 00:21:43.076 ]' 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.076 18:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.076 18:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.334 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:43.334 18:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:44.269 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.269 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:44.269 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.269 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.269 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.269 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.269 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.269 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:44.527 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.528 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.093 00:21:45.093 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.094 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.094 18:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.351 { 00:21:45.351 "cntlid": 107, 00:21:45.351 "qid": 0, 00:21:45.351 "state": "enabled", 00:21:45.351 "thread": "nvmf_tgt_poll_group_000", 00:21:45.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:45.351 "listen_address": { 00:21:45.351 "trtype": "TCP", 00:21:45.351 "adrfam": "IPv4", 00:21:45.351 "traddr": "10.0.0.2", 00:21:45.351 "trsvcid": "4420" 00:21:45.351 }, 00:21:45.351 "peer_address": { 00:21:45.351 "trtype": "TCP", 00:21:45.351 "adrfam": "IPv4", 00:21:45.351 "traddr": "10.0.0.1", 00:21:45.351 "trsvcid": "33922" 00:21:45.351 }, 00:21:45.351 "auth": { 00:21:45.351 "state": "completed", 00:21:45.351 "digest": "sha512", 00:21:45.351 "dhgroup": "ffdhe2048" 00:21:45.351 } 00:21:45.351 } 00:21:45.351 ]' 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.351 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.609 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:45.610 18:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:46.546 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.546 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:46.546 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.546 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.546 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.546 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.546 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.546 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.803 18:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.369 00:21:47.369 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.369 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.369 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.369 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.369 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.369 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.369 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.369 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.369 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.369 { 00:21:47.369 "cntlid": 109, 00:21:47.369 "qid": 0, 00:21:47.369 "state": "enabled", 00:21:47.369 "thread": "nvmf_tgt_poll_group_000", 00:21:47.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:47.369 "listen_address": { 00:21:47.369 "trtype": "TCP", 00:21:47.369 "adrfam": "IPv4", 00:21:47.369 "traddr": "10.0.0.2", 00:21:47.369 "trsvcid": "4420" 00:21:47.369 }, 00:21:47.369 "peer_address": { 00:21:47.369 "trtype": "TCP", 00:21:47.369 "adrfam": "IPv4", 00:21:47.369 "traddr": "10.0.0.1", 00:21:47.369 "trsvcid": "33936" 00:21:47.369 }, 00:21:47.369 "auth": { 00:21:47.369 "state": "completed", 00:21:47.369 "digest": "sha512", 00:21:47.369 "dhgroup": "ffdhe2048" 00:21:47.369 } 00:21:47.369 } 00:21:47.369 ]' 00:21:47.627 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.627 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.627 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.627 18:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.627 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.627 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.627 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.627 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.885 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:47.885 18:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:48.821 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.821 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:48.821 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.821 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.821 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.821 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.821 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.821 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.078 18:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.078 18:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.336 00:21:49.336 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.336 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.336 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.593 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.593 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.593 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.593 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.593 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.593 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.593 { 00:21:49.593 "cntlid": 111, 00:21:49.593 "qid": 0, 00:21:49.593 "state": "enabled", 00:21:49.593 "thread": "nvmf_tgt_poll_group_000", 00:21:49.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:49.593 "listen_address": { 00:21:49.593 "trtype": "TCP", 00:21:49.593 "adrfam": "IPv4", 00:21:49.593 "traddr": "10.0.0.2", 00:21:49.593 "trsvcid": "4420" 00:21:49.593 }, 00:21:49.593 "peer_address": { 00:21:49.593 "trtype": "TCP", 00:21:49.593 "adrfam": "IPv4", 00:21:49.593 "traddr": "10.0.0.1", 00:21:49.593 "trsvcid": "40342" 00:21:49.593 }, 00:21:49.593 "auth": { 00:21:49.593 "state": "completed", 00:21:49.593 "digest": "sha512", 00:21:49.593 "dhgroup": "ffdhe2048" 00:21:49.593 } 00:21:49.593 } 00:21:49.593 ]' 00:21:49.593 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.849 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.849 
18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.849 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.849 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.849 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.849 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.849 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.106 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:50.106 18:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:51.037 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.037 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:51.037 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.037 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.037 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.037 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.037 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.037 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.037 18:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:51.300 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:51.300 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.300 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.300 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:51.300 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.301 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.301 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.301 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.301 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.301 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.301 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.301 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.301 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.569 00:21:51.569 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.569 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.569 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.826 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.826 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.826 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.826 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.826 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.826 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.826 { 00:21:51.826 "cntlid": 113, 00:21:51.826 "qid": 0, 00:21:51.826 "state": "enabled", 00:21:51.826 "thread": "nvmf_tgt_poll_group_000", 00:21:51.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:51.826 "listen_address": { 00:21:51.826 "trtype": "TCP", 00:21:51.826 "adrfam": "IPv4", 00:21:51.826 "traddr": "10.0.0.2", 00:21:51.826 "trsvcid": "4420" 00:21:51.826 }, 00:21:51.826 "peer_address": { 00:21:51.826 "trtype": "TCP", 00:21:51.826 "adrfam": "IPv4", 00:21:51.826 "traddr": "10.0.0.1", 00:21:51.826 "trsvcid": "40370" 00:21:51.826 }, 00:21:51.826 "auth": { 00:21:51.826 "state": "completed", 00:21:51.826 "digest": "sha512", 00:21:51.826 "dhgroup": "ffdhe3072" 00:21:51.826 } 00:21:51.826 } 00:21:51.826 ]' 00:21:51.826 18:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.084 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.084 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.084 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.084 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.084 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.084 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.084 18:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.341 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:52.341 18:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:21:53.274 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.274 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:53.274 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.274 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.274 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.274 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.274 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.274 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.531 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.789 00:21:53.789 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.789 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.789 18:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.047 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.047 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.047 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.047 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.047 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.047 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.047 { 00:21:54.047 "cntlid": 115, 00:21:54.047 "qid": 0, 00:21:54.047 "state": "enabled", 00:21:54.047 "thread": "nvmf_tgt_poll_group_000", 00:21:54.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:54.047 "listen_address": { 00:21:54.047 "trtype": "TCP", 00:21:54.047 "adrfam": "IPv4", 00:21:54.047 "traddr": "10.0.0.2", 00:21:54.047 "trsvcid": "4420" 00:21:54.047 }, 00:21:54.047 "peer_address": { 00:21:54.047 "trtype": "TCP", 00:21:54.047 "adrfam": "IPv4", 
00:21:54.047 "traddr": "10.0.0.1", 00:21:54.047 "trsvcid": "40410" 00:21:54.047 }, 00:21:54.047 "auth": { 00:21:54.047 "state": "completed", 00:21:54.047 "digest": "sha512", 00:21:54.047 "dhgroup": "ffdhe3072" 00:21:54.047 } 00:21:54.047 } 00:21:54.047 ]' 00:21:54.305 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.305 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.305 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.305 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.305 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.305 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.305 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.305 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.562 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:54.562 18:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:21:55.496 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.496 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:55.496 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.496 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.496 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.496 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.496 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.496 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.754 18:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.012 00:21:56.012 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.012 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.270 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.528 { 00:21:56.528 "cntlid": 117, 00:21:56.528 "qid": 0, 00:21:56.528 "state": "enabled", 00:21:56.528 "thread": "nvmf_tgt_poll_group_000", 00:21:56.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:56.528 "listen_address": { 00:21:56.528 "trtype": "TCP", 
00:21:56.528 "adrfam": "IPv4", 00:21:56.528 "traddr": "10.0.0.2", 00:21:56.528 "trsvcid": "4420" 00:21:56.528 }, 00:21:56.528 "peer_address": { 00:21:56.528 "trtype": "TCP", 00:21:56.528 "adrfam": "IPv4", 00:21:56.528 "traddr": "10.0.0.1", 00:21:56.528 "trsvcid": "40436" 00:21:56.528 }, 00:21:56.528 "auth": { 00:21:56.528 "state": "completed", 00:21:56.528 "digest": "sha512", 00:21:56.528 "dhgroup": "ffdhe3072" 00:21:56.528 } 00:21:56.528 } 00:21:56.528 ]' 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.528 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.786 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:56.786 18:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:21:57.719 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.719 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:57.719 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.719 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.719 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.719 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.719 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.719 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.977 18:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.542 00:21:58.542 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.542 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.542 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.802 { 00:21:58.802 "cntlid": 119, 00:21:58.802 "qid": 0, 00:21:58.802 "state": "enabled", 00:21:58.802 "thread": "nvmf_tgt_poll_group_000", 00:21:58.802 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:58.802 "listen_address": { 00:21:58.802 "trtype": "TCP", 00:21:58.802 "adrfam": "IPv4", 00:21:58.802 "traddr": "10.0.0.2", 00:21:58.802 "trsvcid": "4420" 00:21:58.802 }, 00:21:58.802 "peer_address": { 00:21:58.802 "trtype": "TCP", 00:21:58.802 "adrfam": "IPv4", 00:21:58.802 "traddr": "10.0.0.1", 00:21:58.802 "trsvcid": "46352" 00:21:58.802 }, 00:21:58.802 "auth": { 00:21:58.802 "state": "completed", 00:21:58.802 "digest": "sha512", 00:21:58.802 "dhgroup": "ffdhe3072" 00:21:58.802 } 00:21:58.802 } 00:21:58.802 ]' 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.802 18:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.060 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:59.060 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:21:59.997 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.997 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.997 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.997 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.997 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.997 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.997 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.997 18:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:59.997 18:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.254 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.880 00:22:00.880 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.880 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.880 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.880 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.880 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.880 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.880 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.880 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.880 18:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.880 { 00:22:00.880 "cntlid": 121, 00:22:00.880 "qid": 0, 00:22:00.880 "state": "enabled", 00:22:00.880 "thread": "nvmf_tgt_poll_group_000", 00:22:00.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:00.880 "listen_address": { 00:22:00.880 "trtype": "TCP", 00:22:00.880 "adrfam": "IPv4", 00:22:00.880 "traddr": "10.0.0.2", 00:22:00.880 "trsvcid": "4420" 00:22:00.880 }, 00:22:00.880 "peer_address": { 00:22:00.880 "trtype": "TCP", 00:22:00.880 "adrfam": "IPv4", 00:22:00.880 "traddr": "10.0.0.1", 00:22:00.880 "trsvcid": "46388" 00:22:00.880 }, 00:22:00.880 "auth": { 00:22:00.880 "state": "completed", 00:22:00.880 "digest": "sha512", 00:22:00.880 "dhgroup": "ffdhe4096" 00:22:00.880 } 00:22:00.880 } 00:22:00.880 ]' 00:22:01.138 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.138 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.138 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.138 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.138 18:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.138 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.138 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.138 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.397 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:22:01.398 18:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:22:02.332 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.332 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:02.332 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.332 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.332 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
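After each RPC-level pass, the script also pushes the same secrets through the kernel initiator with nvme-cli; condensed, the leg that just completed above looks roughly like the following, where the <...> placeholders stand for the full DHHC-1 secret strings that appear verbatim in the log:

  # in-band DH-HMAC-CHAP authentication via the kernel host stack
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
      --dhchap-secret 'DHHC-1:00:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # drop the host entry so the next key/dhgroup combination starts from a clean subsystem
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a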
00:22:02.332 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.332 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.332 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.590 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.157 00:22:03.157 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.157 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.157 18:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.157 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.157 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.157 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.157 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.416 { 00:22:03.416 "cntlid": 123, 00:22:03.416 "qid": 0, 00:22:03.416 "state": "enabled", 00:22:03.416 "thread": "nvmf_tgt_poll_group_000", 00:22:03.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:03.416 "listen_address": { 00:22:03.416 "trtype": "TCP", 00:22:03.416 "adrfam": "IPv4", 00:22:03.416 "traddr": "10.0.0.2", 00:22:03.416 "trsvcid": "4420" 00:22:03.416 }, 00:22:03.416 "peer_address": { 00:22:03.416 "trtype": "TCP", 00:22:03.416 "adrfam": "IPv4", 00:22:03.416 "traddr": "10.0.0.1", 00:22:03.416 "trsvcid": "46418" 00:22:03.416 }, 00:22:03.416 "auth": { 00:22:03.416 "state": "completed", 00:22:03.416 "digest": "sha512", 00:22:03.416 "dhgroup": "ffdhe4096" 00:22:03.416 } 00:22:03.416 } 00:22:03.416 ]' 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.416 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.676 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:22:03.676 18:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:22:04.615 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.615 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:04.615 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.615 18:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.615 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.615 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.615 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:04.615 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.872 18:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.130 00:22:05.130 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.130 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.130 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.388 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.388 18:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.388 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.388 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.388 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.388 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.388 { 00:22:05.388 "cntlid": 125, 00:22:05.388 "qid": 0, 00:22:05.388 "state": "enabled", 00:22:05.388 "thread": "nvmf_tgt_poll_group_000", 00:22:05.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:05.388 "listen_address": { 00:22:05.388 "trtype": "TCP", 00:22:05.388 "adrfam": "IPv4", 00:22:05.388 "traddr": "10.0.0.2", 00:22:05.388 "trsvcid": "4420" 00:22:05.388 }, 00:22:05.388 "peer_address": { 00:22:05.388 "trtype": "TCP", 00:22:05.388 "adrfam": "IPv4", 00:22:05.388 "traddr": "10.0.0.1", 00:22:05.388 "trsvcid": "46434" 00:22:05.388 }, 00:22:05.388 "auth": { 00:22:05.388 "state": "completed", 00:22:05.388 "digest": "sha512", 00:22:05.388 "dhgroup": "ffdhe4096" 00:22:05.388 } 00:22:05.388 } 00:22:05.388 ]' 00:22:05.388 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.645 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.645 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.645 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:05.645 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.645 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.645 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.645 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.903 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:22:05.903 18:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:22:06.835 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.835 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.836 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.836 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.836 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.836 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.836 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:06.836 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.093 18:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.351 00:22:07.351 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.351 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.351 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.609 18:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.609 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.609 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.609 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.867 { 00:22:07.867 "cntlid": 127, 00:22:07.867 "qid": 0, 00:22:07.867 "state": "enabled", 00:22:07.867 "thread": "nvmf_tgt_poll_group_000", 00:22:07.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:07.867 "listen_address": { 00:22:07.867 "trtype": "TCP", 00:22:07.867 "adrfam": "IPv4", 00:22:07.867 "traddr": "10.0.0.2", 00:22:07.867 "trsvcid": "4420" 00:22:07.867 }, 00:22:07.867 "peer_address": { 00:22:07.867 "trtype": "TCP", 00:22:07.867 "adrfam": "IPv4", 00:22:07.867 "traddr": "10.0.0.1", 00:22:07.867 "trsvcid": "46458" 00:22:07.867 }, 00:22:07.867 "auth": { 00:22:07.867 "state": "completed", 00:22:07.867 "digest": "sha512", 00:22:07.867 "dhgroup": "ffdhe4096" 00:22:07.867 } 00:22:07.867 } 00:22:07.867 ]' 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.867 18:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.125 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:08.125 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:09.058 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.058 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:09.058 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.058 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.058 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.058 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:09.058 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.058 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.058 18:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.316 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.881 00:22:09.881 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.881 18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.881 
18:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.139 { 00:22:10.139 "cntlid": 129, 00:22:10.139 "qid": 0, 00:22:10.139 "state": "enabled", 00:22:10.139 "thread": "nvmf_tgt_poll_group_000", 00:22:10.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:10.139 "listen_address": { 00:22:10.139 "trtype": "TCP", 00:22:10.139 "adrfam": "IPv4", 00:22:10.139 "traddr": "10.0.0.2", 00:22:10.139 "trsvcid": "4420" 00:22:10.139 }, 00:22:10.139 "peer_address": { 00:22:10.139 "trtype": "TCP", 00:22:10.139 "adrfam": "IPv4", 00:22:10.139 "traddr": "10.0.0.1", 00:22:10.139 "trsvcid": "47300" 00:22:10.139 }, 00:22:10.139 "auth": { 00:22:10.139 "state": "completed", 00:22:10.139 "digest": "sha512", 00:22:10.139 "dhgroup": "ffdhe6144" 00:22:10.139 } 00:22:10.139 } 00:22:10.139 ]' 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.139 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.396 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.396 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.396 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.654 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:22:10.654 18:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret 
DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:22:11.585 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.585 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:11.585 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.585 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.585 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.585 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.585 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.585 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.843 18:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.410 00:22:12.410 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.410 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.410 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.668 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.668 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.669 { 00:22:12.669 "cntlid": 131, 00:22:12.669 "qid": 0, 00:22:12.669 "state": "enabled", 00:22:12.669 "thread": "nvmf_tgt_poll_group_000", 00:22:12.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:12.669 "listen_address": { 00:22:12.669 "trtype": "TCP", 00:22:12.669 "adrfam": "IPv4", 00:22:12.669 "traddr": "10.0.0.2", 00:22:12.669 "trsvcid": "4420" 00:22:12.669 }, 00:22:12.669 "peer_address": { 00:22:12.669 "trtype": "TCP", 00:22:12.669 "adrfam": "IPv4", 00:22:12.669 "traddr": "10.0.0.1", 00:22:12.669 "trsvcid": "47322" 00:22:12.669 }, 00:22:12.669 "auth": { 00:22:12.669 "state": "completed", 00:22:12.669 "digest": "sha512", 00:22:12.669 "dhgroup": "ffdhe6144" 00:22:12.669 } 00:22:12.669 } 00:22:12.669 ]' 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.669 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.926 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:22:12.926 18:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:22:13.859 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.859 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:13.859 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.859 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.859 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.859 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.859 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:13.859 18:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.117 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.683 00:22:14.683 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.683 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.683 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.941 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.941 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.941 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.941 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.941 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.941 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.941 { 00:22:14.941 "cntlid": 133, 00:22:14.941 "qid": 0, 00:22:14.941 "state": "enabled", 00:22:14.941 "thread": "nvmf_tgt_poll_group_000", 00:22:14.941 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:14.941 "listen_address": { 00:22:14.941 "trtype": "TCP", 00:22:14.941 "adrfam": "IPv4", 00:22:14.941 "traddr": "10.0.0.2", 00:22:14.941 "trsvcid": "4420" 00:22:14.941 }, 00:22:14.941 "peer_address": { 00:22:14.941 "trtype": "TCP", 00:22:14.941 "adrfam": "IPv4", 00:22:14.941 "traddr": "10.0.0.1", 00:22:14.941 "trsvcid": "47350" 00:22:14.941 }, 00:22:14.941 "auth": { 00:22:14.941 "state": "completed", 00:22:14.941 "digest": "sha512", 00:22:14.941 "dhgroup": "ffdhe6144" 00:22:14.941 } 00:22:14.941 } 00:22:14.941 ]' 00:22:14.941 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.199 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.199 18:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.199 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.199 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.199 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.199 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.199 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.457 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret 
DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:22:15.457 18:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:22:16.390 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.390 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:16.390 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.390 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.390 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.390 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.390 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:16.390 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:16.648 18:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.214 00:22:17.214 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.214 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.214 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.470 { 00:22:17.470 "cntlid": 135, 00:22:17.470 "qid": 0, 00:22:17.470 "state": "enabled", 00:22:17.470 "thread": "nvmf_tgt_poll_group_000", 00:22:17.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:17.470 "listen_address": { 00:22:17.470 "trtype": "TCP", 00:22:17.470 "adrfam": "IPv4", 00:22:17.470 "traddr": "10.0.0.2", 00:22:17.470 "trsvcid": "4420" 00:22:17.470 }, 00:22:17.470 "peer_address": { 00:22:17.470 "trtype": "TCP", 00:22:17.470 "adrfam": "IPv4", 00:22:17.470 "traddr": "10.0.0.1", 00:22:17.470 "trsvcid": "47386" 00:22:17.470 }, 00:22:17.470 "auth": { 00:22:17.470 "state": "completed", 00:22:17.470 "digest": "sha512", 00:22:17.470 "dhgroup": "ffdhe6144" 00:22:17.470 } 00:22:17.470 } 00:22:17.470 ]' 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:17.470 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.727 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.727 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.727 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.985 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:17.985 18:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:18.919 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.919 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:18.919 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.919 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.919 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.919 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.919 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.919 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:18.919 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.177 18:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.111 00:22:20.111 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.111 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.111 18:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.111 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.111 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.111 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.111 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.111 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.111 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.111 { 00:22:20.111 "cntlid": 137, 00:22:20.111 "qid": 0, 00:22:20.111 "state": "enabled", 00:22:20.111 "thread": "nvmf_tgt_poll_group_000", 00:22:20.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:20.111 "listen_address": { 00:22:20.111 "trtype": "TCP", 00:22:20.111 "adrfam": "IPv4", 00:22:20.111 "traddr": "10.0.0.2", 00:22:20.111 "trsvcid": "4420" 00:22:20.111 }, 00:22:20.111 "peer_address": { 00:22:20.111 "trtype": "TCP", 00:22:20.111 "adrfam": "IPv4", 00:22:20.111 "traddr": "10.0.0.1", 00:22:20.111 "trsvcid": "47222" 00:22:20.111 }, 00:22:20.111 "auth": { 00:22:20.111 "state": "completed", 00:22:20.111 "digest": "sha512", 00:22:20.111 "dhgroup": "ffdhe8192" 00:22:20.111 } 00:22:20.111 } 00:22:20.111 ]' 00:22:20.111 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.403 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.403 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.403 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:20.403 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.403 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.403 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.403 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.678 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:22:20.679 18:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:22:21.611 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.611 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:21.611 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.611 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.611 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.611 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.611 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.611 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.869 18:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.869 18:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.802 00:22:22.802 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.802 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.802 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.061 { 00:22:23.061 "cntlid": 139, 00:22:23.061 "qid": 0, 00:22:23.061 "state": "enabled", 00:22:23.061 "thread": "nvmf_tgt_poll_group_000", 00:22:23.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:23.061 "listen_address": { 00:22:23.061 "trtype": "TCP", 00:22:23.061 "adrfam": "IPv4", 00:22:23.061 "traddr": "10.0.0.2", 00:22:23.061 "trsvcid": "4420" 00:22:23.061 }, 00:22:23.061 "peer_address": { 00:22:23.061 "trtype": "TCP", 00:22:23.061 "adrfam": "IPv4", 00:22:23.061 "traddr": "10.0.0.1", 00:22:23.061 "trsvcid": "47258" 00:22:23.061 }, 00:22:23.061 "auth": { 00:22:23.061 "state": "completed", 00:22:23.061 "digest": "sha512", 00:22:23.061 "dhgroup": "ffdhe8192" 00:22:23.061 } 00:22:23.061 } 00:22:23.061 ]' 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.061 18:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.061 18:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.319 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:22:23.319 18:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: --dhchap-ctrl-secret DHHC-1:02:MTgyNWVmZjgyNWM1MTMxOTYwOGMxMWM5MzQxMTY5OTVkNjI2MzBmNTkxODkzY2MxCxaFmA==: 00:22:24.251 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.251 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:24.251 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.251 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.251 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.251 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.251 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:24.251 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.509 18:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.509 18:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.442 00:22:25.442 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.442 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.442 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.699 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.699 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.699 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.699 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.699 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.699 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.699 { 00:22:25.699 "cntlid": 141, 00:22:25.699 "qid": 0, 00:22:25.699 "state": "enabled", 00:22:25.699 "thread": "nvmf_tgt_poll_group_000", 00:22:25.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:25.699 "listen_address": { 00:22:25.699 "trtype": "TCP", 00:22:25.699 "adrfam": "IPv4", 00:22:25.699 "traddr": "10.0.0.2", 00:22:25.699 "trsvcid": "4420" 00:22:25.699 }, 00:22:25.699 "peer_address": { 00:22:25.699 "trtype": "TCP", 00:22:25.699 "adrfam": "IPv4", 00:22:25.699 "traddr": "10.0.0.1", 00:22:25.699 "trsvcid": "47302" 00:22:25.699 }, 00:22:25.699 "auth": { 00:22:25.699 "state": "completed", 00:22:25.699 "digest": "sha512", 00:22:25.699 "dhgroup": "ffdhe8192" 00:22:25.699 } 00:22:25.699 } 00:22:25.699 ]' 00:22:25.699 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.958 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.958 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.958 18:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.958 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.958 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.958 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.958 18:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.216 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:22:26.216 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:01:MjAyYzQyNGNiZWIwNTNjNTQ0NTRmODAwZDI4OGE2YWKq99h6: 00:22:27.146 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.146 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:27.146 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.146 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.146 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.146 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.146 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.146 18:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.403 18:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.403 18:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:28.333 00:22:28.333 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.333 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.333 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.333 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.333 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.333 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.333 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.333 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.333 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.333 { 00:22:28.333 "cntlid": 143, 00:22:28.333 "qid": 0, 00:22:28.333 "state": "enabled", 00:22:28.333 "thread": "nvmf_tgt_poll_group_000", 00:22:28.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:28.333 "listen_address": { 00:22:28.333 "trtype": "TCP", 00:22:28.333 "adrfam": "IPv4", 00:22:28.333 "traddr": "10.0.0.2", 00:22:28.333 "trsvcid": "4420" 00:22:28.333 }, 00:22:28.334 "peer_address": { 00:22:28.334 "trtype": "TCP", 00:22:28.334 "adrfam": "IPv4", 00:22:28.334 "traddr": "10.0.0.1", 00:22:28.334 "trsvcid": "47324" 00:22:28.334 }, 00:22:28.334 "auth": { 00:22:28.334 "state": "completed", 00:22:28.334 "digest": "sha512", 00:22:28.334 "dhgroup": "ffdhe8192" 00:22:28.334 } 00:22:28.334 } 00:22:28.334 ]' 00:22:28.334 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.590 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.590 
18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.590 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:28.590 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.590 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.590 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.590 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.848 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:28.848 18:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:29.777 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:30.033 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:30.033 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.033 18:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:30.033 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:30.033 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:30.033 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.033 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.033 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.033 18:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.033 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.033 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.033 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.033 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.965 00:22:30.965 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.965 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.965 18:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.222 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.222 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.222 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.222 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.222 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.222 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.222 { 00:22:31.222 "cntlid": 145, 00:22:31.222 "qid": 0, 00:22:31.222 "state": "enabled", 00:22:31.222 "thread": "nvmf_tgt_poll_group_000", 00:22:31.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:31.222 "listen_address": { 00:22:31.222 "trtype": "TCP", 00:22:31.222 "adrfam": "IPv4", 00:22:31.222 "traddr": "10.0.0.2", 00:22:31.222 "trsvcid": "4420" 00:22:31.222 }, 00:22:31.222 "peer_address": { 00:22:31.222 
"trtype": "TCP", 00:22:31.222 "adrfam": "IPv4", 00:22:31.222 "traddr": "10.0.0.1", 00:22:31.222 "trsvcid": "48116" 00:22:31.222 }, 00:22:31.222 "auth": { 00:22:31.222 "state": "completed", 00:22:31.222 "digest": "sha512", 00:22:31.222 "dhgroup": "ffdhe8192" 00:22:31.222 } 00:22:31.222 } 00:22:31.222 ]' 00:22:31.222 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.479 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.479 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.480 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:31.480 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.480 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.480 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.480 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.737 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:22:31.737 18:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:OGNkOTI4NGIwNjdmMzBhNDMyNTIzNWIzNDY1MDkwZWY4ODc3YWViODRmMTc1ZGY13gmbjA==: --dhchap-ctrl-secret DHHC-1:03:MmZlN2MxNTI1NWRiOWU4ZTk5ZWM0ZmZlMTRiNGUwZmUzZGU4MGNkYTVhN2ZkMTc5MDc5MDQwODc3NjNjOTc2MMRj5PE=: 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:32.670 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:32.671 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.671 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:32.671 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:32.671 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:32.671 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:32.671 18:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:33.603 request: 00:22:33.603 { 00:22:33.603 "name": "nvme0", 00:22:33.603 "trtype": "tcp", 00:22:33.603 "traddr": "10.0.0.2", 00:22:33.603 "adrfam": "ipv4", 00:22:33.603 "trsvcid": "4420", 00:22:33.603 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:33.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:33.603 "prchk_reftag": false, 00:22:33.603 "prchk_guard": false, 00:22:33.603 "hdgst": false, 00:22:33.603 "ddgst": false, 00:22:33.603 "dhchap_key": "key2", 00:22:33.603 "allow_unrecognized_csi": false, 00:22:33.603 "method": "bdev_nvme_attach_controller", 00:22:33.603 "req_id": 1 00:22:33.603 } 00:22:33.603 Got JSON-RPC error response 00:22:33.603 response: 00:22:33.603 { 00:22:33.603 "code": -5, 00:22:33.603 "message": "Input/output error" 00:22:33.603 } 00:22:33.603 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:33.603 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:33.603 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:33.603 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:33.603 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.604 18:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:33.604 18:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:34.536 request: 00:22:34.536 { 00:22:34.536 "name": "nvme0", 00:22:34.536 "trtype": "tcp", 00:22:34.536 "traddr": "10.0.0.2", 00:22:34.536 "adrfam": "ipv4", 00:22:34.536 "trsvcid": "4420", 00:22:34.536 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:34.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:34.536 "prchk_reftag": false, 00:22:34.536 "prchk_guard": false, 00:22:34.536 "hdgst": false, 00:22:34.536 "ddgst": false, 00:22:34.536 "dhchap_key": "key1", 00:22:34.536 "dhchap_ctrlr_key": "ckey2", 00:22:34.536 "allow_unrecognized_csi": false, 00:22:34.536 "method": "bdev_nvme_attach_controller", 00:22:34.536 "req_id": 1 00:22:34.536 } 00:22:34.536 Got JSON-RPC error response 00:22:34.536 response: 00:22:34.536 { 00:22:34.536 "code": -5, 00:22:34.536 "message": "Input/output error" 00:22:34.536 } 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:34.536 18:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:34.536 18:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:35.100 request: 00:22:35.100 { 00:22:35.100 "name": "nvme0", 00:22:35.100 "trtype": "tcp", 00:22:35.100 "traddr": "10.0.0.2", 00:22:35.100 "adrfam": "ipv4", 00:22:35.100 "trsvcid": "4420", 00:22:35.100 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:35.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:35.100 "prchk_reftag": false, 00:22:35.100 "prchk_guard": false, 00:22:35.100 "hdgst": false, 00:22:35.100 "ddgst": false, 00:22:35.100 "dhchap_key": "key1", 00:22:35.100 "dhchap_ctrlr_key": "ckey1", 00:22:35.100 "allow_unrecognized_csi": false, 00:22:35.100 "method": "bdev_nvme_attach_controller", 00:22:35.100 "req_id": 1 00:22:35.100 } 00:22:35.100 Got JSON-RPC error response 00:22:35.100 response: 00:22:35.100 { 00:22:35.100 "code": -5, 00:22:35.100 "message": "Input/output error" 00:22:35.100 } 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 597295 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 597295 ']' 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 597295 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597295 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597295' 00:22:35.100 killing process with pid 597295 00:22:35.100 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 597295 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 597295 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=620032 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 620032 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 620032 ']' 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.358 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.923 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.923 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 620032 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 620032 ']' 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
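The target restart captured above reduces, roughly, to the sequence below. This is an illustrative sketch assembled from the commands visible in this log; the actual nvmfappstart/waitforlisten helpers in the SPDK test scripts do more bookkeeping than shown here, and the readiness poll is just one plausible way to wait for the RPC socket, not necessarily what the helper does internally.

  # Start nvmf_tgt inside the test netns with DH-HMAC-CHAP auth logging enabled;
  # --wait-for-rpc defers subsystem initialization until framework_start_init is called.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Poll the default JSON-RPC socket until the app answers, then finish init.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  "$RPC" -s /var/tmp/spdk.sock framework_start_init

Starting with --wait-for-rpc is what lets the test configure authentication material before the transport comes up, which matches the keyring and subsystem setup that follows in this log.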
00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.924 18:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.182 null0 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jI6 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.dfj ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dfj 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.gES 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Ada ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ada 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:36.182 18:18:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.3Xb 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Tn5 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tn5 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ssA 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
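The key registration and the first authenticated attach in this run boil down, approximately, to the RPC sequence below. Paths, NQNs and key names are copied from the log; the sketch glosses over the rpc_cmd/hostrpc wrappers and assumes the key files under /tmp were generated earlier in the test and that the host-side app listening on /var/tmp/host.sock has the matching keyring entries registered.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Register a DH-HMAC-CHAP secret file in the target app's keyring.
  "$RPC" keyring_file_add_key key3 /tmp/spdk.key-sha512.ssA

  # Authorize the host NQN on the subsystem, binding it to key3.
  "$RPC" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-key key3

  # From the host-side app, attach a controller and authenticate with key3.
  "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

The negative cases interleaved with this flow (the NOT bdev_connect ... steps) expect exactly the code -5 "Input/output error" responses recorded in this log whenever the host offers a key, digest, or DH group the subsystem was not configured to accept.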
00:22:36.182 18:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.556 nvme0n1 00:22:37.556 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.556 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.556 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.814 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.072 { 00:22:38.072 "cntlid": 1, 00:22:38.072 "qid": 0, 00:22:38.072 "state": "enabled", 00:22:38.072 "thread": "nvmf_tgt_poll_group_000", 00:22:38.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:38.072 "listen_address": { 00:22:38.072 "trtype": "TCP", 00:22:38.072 "adrfam": "IPv4", 00:22:38.072 "traddr": "10.0.0.2", 00:22:38.072 "trsvcid": "4420" 00:22:38.072 }, 00:22:38.072 "peer_address": { 00:22:38.072 "trtype": "TCP", 00:22:38.072 "adrfam": "IPv4", 00:22:38.072 "traddr": "10.0.0.1", 00:22:38.072 "trsvcid": "48174" 00:22:38.072 }, 00:22:38.072 "auth": { 00:22:38.072 "state": "completed", 00:22:38.072 "digest": "sha512", 00:22:38.072 "dhgroup": "ffdhe8192" 00:22:38.072 } 00:22:38.072 } 00:22:38.072 ]' 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.072 18:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.330 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:38.330 18:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:39.263 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.521 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.779 request: 00:22:39.779 { 00:22:39.779 "name": "nvme0", 00:22:39.779 "trtype": "tcp", 00:22:39.779 "traddr": "10.0.0.2", 00:22:39.779 "adrfam": "ipv4", 00:22:39.779 "trsvcid": "4420", 00:22:39.779 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:39.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:39.779 "prchk_reftag": false, 00:22:39.779 "prchk_guard": false, 00:22:39.779 "hdgst": false, 00:22:39.779 "ddgst": false, 00:22:39.779 "dhchap_key": "key3", 00:22:39.779 "allow_unrecognized_csi": false, 00:22:39.779 "method": "bdev_nvme_attach_controller", 00:22:39.779 "req_id": 1 00:22:39.779 } 00:22:39.779 Got JSON-RPC error response 00:22:39.779 response: 00:22:39.779 { 00:22:39.779 "code": -5, 00:22:39.779 "message": "Input/output error" 00:22:39.779 } 00:22:39.779 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:39.779 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:39.779 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:39.779 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:39.779 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:39.779 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:39.779 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:39.779 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:40.036 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:40.036 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:40.036 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:40.036 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:40.036 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.036 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:40.036 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.036 18:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:40.036 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.036 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.295 request: 00:22:40.295 { 00:22:40.295 "name": "nvme0", 00:22:40.295 "trtype": "tcp", 00:22:40.295 "traddr": "10.0.0.2", 00:22:40.295 "adrfam": "ipv4", 00:22:40.295 "trsvcid": "4420", 00:22:40.295 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:40.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:40.295 "prchk_reftag": false, 00:22:40.295 "prchk_guard": false, 00:22:40.295 "hdgst": false, 00:22:40.295 "ddgst": false, 00:22:40.295 "dhchap_key": "key3", 00:22:40.295 "allow_unrecognized_csi": false, 00:22:40.295 "method": "bdev_nvme_attach_controller", 00:22:40.295 "req_id": 1 00:22:40.295 } 00:22:40.295 Got JSON-RPC error response 00:22:40.295 response: 00:22:40.295 { 00:22:40.295 "code": -5, 00:22:40.295 "message": "Input/output error" 00:22:40.295 } 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:40.295 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:40.553 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:40.553 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.553 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:40.876 18:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:41.160 request: 00:22:41.160 { 00:22:41.160 "name": "nvme0", 00:22:41.160 "trtype": "tcp", 00:22:41.160 "traddr": "10.0.0.2", 00:22:41.160 "adrfam": "ipv4", 00:22:41.160 "trsvcid": "4420", 00:22:41.160 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:41.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:41.160 "prchk_reftag": false, 00:22:41.160 "prchk_guard": false, 00:22:41.160 "hdgst": false, 00:22:41.160 "ddgst": false, 00:22:41.160 "dhchap_key": "key0", 00:22:41.160 "dhchap_ctrlr_key": "key1", 00:22:41.160 "allow_unrecognized_csi": false, 00:22:41.160 "method": "bdev_nvme_attach_controller", 00:22:41.160 "req_id": 1 00:22:41.160 } 00:22:41.160 Got JSON-RPC error response 00:22:41.160 response: 00:22:41.160 { 00:22:41.160 "code": -5, 00:22:41.160 "message": "Input/output error" 00:22:41.160 } 00:22:41.160 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:41.160 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.160 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.160 18:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.160 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:41.160 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:41.160 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:41.725 nvme0n1 00:22:41.725 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:41.725 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:41.725 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.983 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.983 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.983 18:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.241 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:22:42.241 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.241 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.241 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.241 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:42.241 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:42.241 18:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:43.614 nvme0n1 00:22:43.614 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:43.614 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:43.614 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.872 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.872 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:43.872 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.872 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.872 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.872 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:43.872 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:43.872 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.130 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.130 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:44.131 18:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: --dhchap-ctrl-secret DHHC-1:03:NzhjYzJhNTg0MDNmZjlmMWI2ZWJlNWI2N2UwYjk4YmJkMDZkMDIxYTBjMTNjOThiZGU0OWEwYmJjMzBhYzM1NYTDK0c=: 00:22:45.065 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:45.065 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:45.065 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:45.065 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:45.065 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:45.065 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:45.065 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:45.065 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.065 18:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:45.323 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:46.254 request: 00:22:46.255 { 00:22:46.255 "name": "nvme0", 00:22:46.255 "trtype": "tcp", 00:22:46.255 "traddr": "10.0.0.2", 00:22:46.255 "adrfam": "ipv4", 00:22:46.255 "trsvcid": "4420", 00:22:46.255 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:46.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:22:46.255 "prchk_reftag": false, 00:22:46.255 "prchk_guard": false, 00:22:46.255 "hdgst": false, 00:22:46.255 "ddgst": false, 00:22:46.255 "dhchap_key": "key1", 00:22:46.255 "allow_unrecognized_csi": false, 00:22:46.255 "method": "bdev_nvme_attach_controller", 00:22:46.255 "req_id": 1 00:22:46.255 } 00:22:46.255 Got JSON-RPC error response 00:22:46.255 response: 00:22:46.255 { 00:22:46.255 "code": -5, 00:22:46.255 "message": "Input/output error" 00:22:46.255 } 00:22:46.255 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:46.255 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:46.255 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:46.255 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:46.255 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:46.255 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:46.255 18:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:47.630 nvme0n1 00:22:47.630 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:47.630 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:47.630 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.630 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.630 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.630 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.891 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:47.891 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.891 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.149 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.149 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:48.149 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:48.149 18:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:48.407 nvme0n1 00:22:48.407 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:48.407 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:48.407 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.665 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.665 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.665 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: '' 2s 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: ]] 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjRhYmM5ODMzNTI0OTVjOTExNTQ5MDkwNmJhOGY1MmaT0zaw: 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:48.923 18:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: 2s 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:50.822 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: 00:22:51.080 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:51.080 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:51.080 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:51.080 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: ]] 00:22:51.080 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YjlmMTgzMThmNjg0MDhkYWExMTYyZDA2YTY2YTE3NzgxN2U5NTg5YWY3YTU4NWE5diPgMQ==: 00:22:51.080 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:51.080 18:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:52.980 18:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:54.350 nvme0n1 00:22:54.351 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:54.351 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.351 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.351 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.351 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:54.351 18:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:55.285 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:55.285 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:55.285 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.544 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.544 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:55.544 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.544 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.544 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.544 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:55.544 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:55.802 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:55.802 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:55.802 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:56.060 18:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:56.992 request: 00:22:56.992 { 00:22:56.992 "name": "nvme0", 00:22:56.992 "dhchap_key": "key1", 00:22:56.992 "dhchap_ctrlr_key": "key3", 00:22:56.992 "method": "bdev_nvme_set_keys", 00:22:56.992 "req_id": 1 00:22:56.992 } 00:22:56.992 Got JSON-RPC error response 00:22:56.992 response: 00:22:56.992 { 00:22:56.992 "code": -13, 00:22:56.992 "message": "Permission denied" 00:22:56.992 } 00:22:56.992 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:56.992 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.992 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.992 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.992 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:56.992 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:56.992 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.992 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:56.992 18:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:58.364 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:58.364 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:58.364 18:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.364 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:58.364 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:58.364 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.364 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.364 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.364 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:58.364 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:58.364 18:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.738 nvme0n1 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
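For reference, the re-key flow being exercised in this part of the trace reduces to roughly the sketch below. Command names, socket paths and NQNs are copied from the log itself; the key0..key3 names are assumed to have been registered in the SPDK keyring earlier in the test, which is not shown here.

  # Hedged sketch of the DH-HMAC-CHAP re-key sequence (recap, not a verbatim excerpt).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Target side (default RPC socket): allow only the new key pair for this host.
  $rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3

  # Host side (initiator app listens on /var/tmp/host.sock): rotate the controller to match.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

  # A pair the target was not configured with (key1/key3 in the trace above) is expected
  # to be rejected with JSON-RPC error -13 "Permission denied", which the NOT wrapper asserts.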
00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:59.738 18:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:00.670 request: 00:23:00.670 { 00:23:00.670 "name": "nvme0", 00:23:00.670 "dhchap_key": "key2", 00:23:00.670 "dhchap_ctrlr_key": "key0", 00:23:00.670 "method": "bdev_nvme_set_keys", 00:23:00.670 "req_id": 1 00:23:00.670 } 00:23:00.670 Got JSON-RPC error response 00:23:00.670 response: 00:23:00.670 { 00:23:00.670 "code": -13, 00:23:00.670 "message": "Permission denied" 00:23:00.670 } 00:23:00.670 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:00.670 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.670 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.670 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.670 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:00.670 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:00.671 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.928 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:00.928 18:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:01.861 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:01.861 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:01.861 18:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 597407 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 597407 ']' 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 597407 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:02.119 18:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597407 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597407' 00:23:02.119 killing process with pid 597407 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 597407 00:23:02.119 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 597407 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:02.685 rmmod nvme_tcp 00:23:02.685 rmmod nvme_fabrics 00:23:02.685 rmmod nvme_keyring 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 620032 ']' 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 620032 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 620032 ']' 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 620032 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.685 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 620032 00:23:02.686 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.686 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.686 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 620032' 00:23:02.686 killing process with pid 620032 00:23:02.686 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 620032 00:23:02.686 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 620032 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.945 18:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.jI6 /tmp/spdk.key-sha256.gES /tmp/spdk.key-sha384.3Xb /tmp/spdk.key-sha512.ssA /tmp/spdk.key-sha512.dfj /tmp/spdk.key-sha384.Ada /tmp/spdk.key-sha256.Tn5 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:04.911 00:23:04.911 real 3m30.284s 00:23:04.911 user 8m13.912s 00:23:04.911 sys 0m27.500s 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.911 ************************************ 00:23:04.911 END TEST nvmf_auth_target 00:23:04.911 ************************************ 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:04.911 ************************************ 00:23:04.911 START TEST nvmf_bdevio_no_huge 00:23:04.911 ************************************ 00:23:04.911 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:05.169 * Looking for test storage... 
00:23:05.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:05.169 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:05.169 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:23:05.169 18:18:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:05.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.169 --rc genhtml_branch_coverage=1 00:23:05.169 --rc genhtml_function_coverage=1 00:23:05.169 --rc genhtml_legend=1 00:23:05.169 --rc geninfo_all_blocks=1 00:23:05.169 --rc geninfo_unexecuted_blocks=1 00:23:05.169 00:23:05.169 ' 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:05.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.169 --rc genhtml_branch_coverage=1 00:23:05.169 --rc genhtml_function_coverage=1 00:23:05.169 --rc genhtml_legend=1 00:23:05.169 --rc geninfo_all_blocks=1 00:23:05.169 --rc geninfo_unexecuted_blocks=1 00:23:05.169 00:23:05.169 ' 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:05.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.169 --rc genhtml_branch_coverage=1 00:23:05.169 --rc genhtml_function_coverage=1 00:23:05.169 --rc genhtml_legend=1 00:23:05.169 --rc geninfo_all_blocks=1 00:23:05.169 --rc geninfo_unexecuted_blocks=1 00:23:05.169 00:23:05.169 ' 00:23:05.169 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:05.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.169 --rc genhtml_branch_coverage=1 00:23:05.170 --rc genhtml_function_coverage=1 00:23:05.170 --rc genhtml_legend=1 00:23:05.170 --rc geninfo_all_blocks=1 00:23:05.170 --rc geninfo_unexecuted_blocks=1 00:23:05.170 00:23:05.170 ' 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:05.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:05.170 18:18:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.709 
18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:07.709 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:07.709 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:07.709 Found net devices under 0000:09:00.0: cvl_0_0 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:07.709 Found net devices under 0000:09:00.1: cvl_0_1 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.709 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:23:07.710 00:23:07.710 --- 10.0.0.2 ping statistics --- 00:23:07.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.710 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:23:07.710 00:23:07.710 --- 10.0.0.1 ping statistics --- 00:23:07.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.710 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=625297 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 625297 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 625297 ']' 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.710 [2024-11-26 18:18:55.381223] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:23:07.710 [2024-11-26 18:18:55.381328] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:07.710 [2024-11-26 18:18:55.458431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.710 [2024-11-26 18:18:55.520346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.710 [2024-11-26 18:18:55.520405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.710 [2024-11-26 18:18:55.520419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.710 [2024-11-26 18:18:55.520430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.710 [2024-11-26 18:18:55.520441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
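While the target application initializes, the namespace plumbing that nvmftestinit performed above amounts to roughly the following; interface names, addresses and the nvmf_tgt flags are taken verbatim from the trace, so this is a recap sketch rather than an additional setup step.

  # Sketch of the test-network setup captured above: the target-side port cvl_0_0 is
  # moved into its own namespace so host and target talk over a real TCP path.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP (port 4420) in from the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # The target is then started inside the namespace without hugepages.
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78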
00:23:07.710 [2024-11-26 18:18:55.521473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:07.710 [2024-11-26 18:18:55.521538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:07.710 [2024-11-26 18:18:55.521587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:07.710 [2024-11-26 18:18:55.521590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.710 [2024-11-26 18:18:55.678128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.710 Malloc0 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.710 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:07.710 [2024-11-26 18:18:55.716366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:07.969 { 00:23:07.969 "params": { 00:23:07.969 "name": "Nvme$subsystem", 00:23:07.969 "trtype": "$TEST_TRANSPORT", 00:23:07.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.969 "adrfam": "ipv4", 00:23:07.969 "trsvcid": "$NVMF_PORT", 00:23:07.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.969 "hdgst": ${hdgst:-false}, 00:23:07.969 "ddgst": ${ddgst:-false} 00:23:07.969 }, 00:23:07.969 "method": "bdev_nvme_attach_controller" 00:23:07.969 } 00:23:07.969 EOF 00:23:07.969 )") 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:07.969 18:18:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:07.969 "params": { 00:23:07.969 "name": "Nvme1", 00:23:07.969 "trtype": "tcp", 00:23:07.969 "traddr": "10.0.0.2", 00:23:07.969 "adrfam": "ipv4", 00:23:07.969 "trsvcid": "4420", 00:23:07.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.969 "hdgst": false, 00:23:07.969 "ddgst": false 00:23:07.969 }, 00:23:07.969 "method": "bdev_nvme_attach_controller" 00:23:07.969 }' 00:23:07.969 [2024-11-26 18:18:55.767464] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:23:07.969 [2024-11-26 18:18:55.767544] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid625328 ] 00:23:07.969 [2024-11-26 18:18:55.839202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:07.969 [2024-11-26 18:18:55.905361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.969 [2024-11-26 18:18:55.905414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.969 [2024-11-26 18:18:55.905419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.226 I/O targets: 00:23:08.226 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:08.226 00:23:08.226 00:23:08.226 CUnit - A unit testing framework for C - Version 2.1-3 00:23:08.226 http://cunit.sourceforge.net/ 00:23:08.226 00:23:08.226 00:23:08.226 Suite: bdevio tests on: Nvme1n1 00:23:08.226 Test: blockdev write read block ...passed 00:23:08.226 Test: blockdev write zeroes read block ...passed 00:23:08.226 Test: blockdev write zeroes read no split ...passed 00:23:08.226 Test: blockdev write zeroes read split ...passed 00:23:08.226 Test: blockdev write zeroes read split partial ...passed 00:23:08.226 Test: blockdev reset ...[2024-11-26 18:18:56.220701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:08.226 [2024-11-26 18:18:56.220809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103e6a0 (9): Bad file descriptor 00:23:08.483 [2024-11-26 18:18:56.290444] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:23:08.483 passed 00:23:08.483 Test: blockdev write read 8 blocks ...passed 00:23:08.483 Test: blockdev write read size > 128k ...passed 00:23:08.483 Test: blockdev write read invalid size ...passed 00:23:08.483 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:08.483 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:08.483 Test: blockdev write read max offset ...passed 00:23:08.483 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:08.483 Test: blockdev writev readv 8 blocks ...passed 00:23:08.483 Test: blockdev writev readv 30 x 1block ...passed 00:23:08.741 Test: blockdev writev readv block ...passed 00:23:08.741 Test: blockdev writev readv size > 128k ...passed 00:23:08.741 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:08.741 Test: blockdev comparev and writev ...[2024-11-26 18:18:56.503686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.741 [2024-11-26 18:18:56.503722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.503746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.741 [2024-11-26 18:18:56.503765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.504074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.741 [2024-11-26 18:18:56.504101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.504124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.741 [2024-11-26 18:18:56.504141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.504450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.741 [2024-11-26 18:18:56.504476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.504498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.741 [2024-11-26 18:18:56.504516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.504814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.741 [2024-11-26 18:18:56.504839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.504861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:08.741 [2024-11-26 18:18:56.504877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:08.741 passed 00:23:08.741 Test: blockdev nvme passthru rw ...passed 00:23:08.741 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:18:56.586551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.741 [2024-11-26 18:18:56.586579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.586735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.741 [2024-11-26 18:18:56.586760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.586907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.741 [2024-11-26 18:18:56.586937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:08.741 [2024-11-26 18:18:56.587083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:08.741 [2024-11-26 18:18:56.587107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:08.741 passed 00:23:08.741 Test: blockdev nvme admin passthru ...passed 00:23:08.741 Test: blockdev copy ...passed 00:23:08.741 00:23:08.741 Run Summary: Type Total Ran Passed Failed Inactive 00:23:08.741 suites 1 1 n/a 0 0 00:23:08.741 tests 23 23 23 0 0 00:23:08.741 asserts 152 152 152 0 n/a 00:23:08.741 00:23:08.741 Elapsed time = 1.082 seconds 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:08.999 18:18:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:08.999 rmmod nvme_tcp 00:23:08.999 rmmod nvme_fabrics 00:23:09.255 rmmod nvme_keyring 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 625297 ']' 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 625297 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 625297 ']' 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 625297 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 625297 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 625297' 00:23:09.255 killing process with pid 625297 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 625297 00:23:09.255 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 625297 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.513 18:18:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:12.050 00:23:12.050 real 0m6.600s 00:23:12.050 user 0m10.292s 00:23:12.050 sys 0m2.670s 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:12.050 ************************************ 00:23:12.050 END TEST nvmf_bdevio_no_huge 00:23:12.050 ************************************ 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:12.050 ************************************ 00:23:12.050 START TEST nvmf_tls 00:23:12.050 ************************************ 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:12.050 * Looking for test storage... 00:23:12.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:23:12.050 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.051 --rc genhtml_branch_coverage=1 00:23:12.051 --rc genhtml_function_coverage=1 00:23:12.051 --rc genhtml_legend=1 00:23:12.051 --rc geninfo_all_blocks=1 00:23:12.051 --rc geninfo_unexecuted_blocks=1 00:23:12.051 00:23:12.051 ' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.051 --rc genhtml_branch_coverage=1 00:23:12.051 --rc genhtml_function_coverage=1 00:23:12.051 --rc genhtml_legend=1 00:23:12.051 --rc geninfo_all_blocks=1 00:23:12.051 --rc geninfo_unexecuted_blocks=1 00:23:12.051 00:23:12.051 ' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.051 --rc genhtml_branch_coverage=1 00:23:12.051 --rc genhtml_function_coverage=1 00:23:12.051 --rc genhtml_legend=1 00:23:12.051 --rc geninfo_all_blocks=1 00:23:12.051 --rc geninfo_unexecuted_blocks=1 00:23:12.051 00:23:12.051 ' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:12.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.051 --rc genhtml_branch_coverage=1 00:23:12.051 --rc genhtml_function_coverage=1 00:23:12.051 --rc genhtml_legend=1 00:23:12.051 --rc geninfo_all_blocks=1 00:23:12.051 --rc geninfo_unexecuted_blocks=1 00:23:12.051 00:23:12.051 ' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:12.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.051 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.052 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:12.052 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:12.052 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:12.052 18:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:13.958 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:13.958 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:13.958 Found net devices under 0000:09:00.0: cvl_0_0 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:13.958 Found net devices under 0000:09:00.1: cvl_0_1 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.958 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.959 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:23:14.218 00:23:14.218 --- 10.0.0.2 ping statistics --- 00:23:14.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.218 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:14.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:23:14.218 00:23:14.218 --- 10.0.0.1 ping statistics --- 00:23:14.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.218 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:14.218 18:19:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=627521 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 627521 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 627521 ']' 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.218 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.218 [2024-11-26 18:19:02.066196] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:23:14.218 [2024-11-26 18:19:02.066275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.218 [2024-11-26 18:19:02.142161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.218 [2024-11-26 18:19:02.200701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.218 [2024-11-26 18:19:02.200757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.218 [2024-11-26 18:19:02.200771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.218 [2024-11-26 18:19:02.200782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.218 [2024-11-26 18:19:02.200792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.218 [2024-11-26 18:19:02.201449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.477 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.477 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:14.477 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:14.477 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.477 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.477 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.477 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:14.477 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:14.735 true 00:23:14.735 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:14.735 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:14.993 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:14.993 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:14.993 18:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:15.251 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:15.251 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:15.509 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:15.509 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:15.509 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:15.767 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:15.767 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:16.026 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:16.026 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:16.026 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:16.026 18:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:16.285 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:16.285 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:16.285 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:16.852 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:16.852 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:16.852 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:16.852 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:16.852 18:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:17.110 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:17.110 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Rkp8QaWXZZ 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.WLhSMjb7Ar 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:17.677 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Rkp8QaWXZZ 00:23:17.678 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.WLhSMjb7Ar 00:23:17.678 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:17.936 18:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:18.194 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Rkp8QaWXZZ 00:23:18.194 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Rkp8QaWXZZ 00:23:18.194 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:18.453 [2024-11-26 18:19:06.423074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.453 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:18.712 18:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:19.279 [2024-11-26 18:19:07.012681] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:19.279 [2024-11-26 18:19:07.012904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.279 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:19.538 malloc0 00:23:19.538 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:19.796 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Rkp8QaWXZZ 00:23:20.054 18:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.312 18:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Rkp8QaWXZZ 00:23:32.511 Initializing NVMe Controllers 00:23:32.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:32.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:32.511 Initialization complete. Launching workers. 00:23:32.511 ======================================================== 00:23:32.511 Latency(us) 00:23:32.511 Device Information : IOPS MiB/s Average min max 00:23:32.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8632.87 33.72 7415.51 1119.08 9463.70 00:23:32.511 ======================================================== 00:23:32.511 Total : 8632.87 33.72 7415.51 1119.08 9463.70 00:23:32.511 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Rkp8QaWXZZ 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Rkp8QaWXZZ 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=629423 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 629423 /var/tmp/bdevperf.sock 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 629423 ']' 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.511 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:32.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.512 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.512 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.512 [2024-11-26 18:19:18.393670] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:23:32.512 [2024-11-26 18:19:18.393768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid629423 ] 00:23:32.512 [2024-11-26 18:19:18.459336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.512 [2024-11-26 18:19:18.517148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.512 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.512 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.512 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Rkp8QaWXZZ 00:23:32.512 18:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:32.512 [2024-11-26 18:19:19.159435] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.512 TLSTESTn1 00:23:32.512 18:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:32.512 Running I/O for 10 seconds... 
00:23:33.445 3424.00 IOPS, 13.38 MiB/s [2024-11-26T17:19:22.453Z] 3466.00 IOPS, 13.54 MiB/s [2024-11-26T17:19:23.388Z] 3494.33 IOPS, 13.65 MiB/s [2024-11-26T17:19:24.760Z] 3502.25 IOPS, 13.68 MiB/s [2024-11-26T17:19:25.691Z] 3509.80 IOPS, 13.71 MiB/s [2024-11-26T17:19:26.624Z] 3526.83 IOPS, 13.78 MiB/s [2024-11-26T17:19:27.557Z] 3539.71 IOPS, 13.83 MiB/s [2024-11-26T17:19:28.489Z] 3551.25 IOPS, 13.87 MiB/s [2024-11-26T17:19:29.422Z] 3547.78 IOPS, 13.86 MiB/s [2024-11-26T17:19:29.423Z] 3535.50 IOPS, 13.81 MiB/s 00:23:41.412 Latency(us) 00:23:41.412 [2024-11-26T17:19:29.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.412 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:41.412 Verification LBA range: start 0x0 length 0x2000 00:23:41.412 TLSTESTn1 : 10.02 3540.58 13.83 0.00 0.00 36089.41 5801.15 43302.31 00:23:41.412 [2024-11-26T17:19:29.423Z] =================================================================================================================== 00:23:41.412 [2024-11-26T17:19:29.423Z] Total : 3540.58 13.83 0.00 0.00 36089.41 5801.15 43302.31 00:23:41.412 { 00:23:41.412 "results": [ 00:23:41.412 { 00:23:41.412 "job": "TLSTESTn1", 00:23:41.412 "core_mask": "0x4", 00:23:41.412 "workload": "verify", 00:23:41.412 "status": "finished", 00:23:41.412 "verify_range": { 00:23:41.412 "start": 0, 00:23:41.412 "length": 8192 00:23:41.412 }, 00:23:41.412 "queue_depth": 128, 00:23:41.412 "io_size": 4096, 00:23:41.412 "runtime": 10.020405, 00:23:41.412 "iops": 3540.5754557824757, 00:23:41.412 "mibps": 13.830372874150296, 00:23:41.412 "io_failed": 0, 00:23:41.412 "io_timeout": 0, 00:23:41.412 "avg_latency_us": 36089.40613980913, 00:23:41.412 "min_latency_us": 5801.14962962963, 00:23:41.412 "max_latency_us": 43302.305185185185 00:23:41.412 } 00:23:41.412 ], 00:23:41.412 "core_count": 1 00:23:41.412 } 00:23:41.412 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.412 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 629423 00:23:41.412 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 629423 ']' 00:23:41.412 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 629423 00:23:41.412 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:41.412 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.412 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 629423 00:23:41.670 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:41.670 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:41.670 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 629423' 00:23:41.670 killing process with pid 629423 00:23:41.670 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 629423 00:23:41.670 Received shutdown signal, test time was about 10.000000 seconds 00:23:41.670 00:23:41.670 Latency(us) 00:23:41.670 [2024-11-26T17:19:29.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.670 [2024-11-26T17:19:29.681Z] 
=================================================================================================================== 00:23:41.670 [2024-11-26T17:19:29.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.670 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 629423 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WLhSMjb7Ar 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WLhSMjb7Ar 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WLhSMjb7Ar 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.WLhSMjb7Ar 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=630765 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 630765 /var/tmp/bdevperf.sock 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 630765 ']' 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
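Each of these TLS cases drives bdevperf the same way: the perf binary is started idle and a separate RPC session loads the key and attaches the controller before any traffic is generated. A reader's gloss of the launch command shown above (flag meanings taken from standard SPDK bdevperf/app options, not from this log):

# Start bdevperf idle on core 2 (-m 0x4), waiting for an RPC 'perform_tests' (-z),
# with its JSON-RPC socket at /var/tmp/bdevperf.sock (-r); once triggered, the job
# is a 10-second verify run at queue depth 128 with 4 KiB I/Os (-w/-t/-q/-o).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10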
00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.928 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.928 [2024-11-26 18:19:29.732772] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:23:41.928 [2024-11-26 18:19:29.732873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630765 ] 00:23:41.928 [2024-11-26 18:19:29.799446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.928 [2024-11-26 18:19:29.855389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.186 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.186 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.186 18:19:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WLhSMjb7Ar 00:23:42.444 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:42.701 [2024-11-26 18:19:30.524787] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.701 [2024-11-26 18:19:30.530505] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:42.701 [2024-11-26 18:19:30.531023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2d2f0 (107): Transport endpoint is not connected 00:23:42.701 [2024-11-26 18:19:30.532014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2d2f0 (9): Bad file descriptor 00:23:42.701 [2024-11-26 18:19:30.533013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:42.701 [2024-11-26 18:19:30.533034] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:42.701 [2024-11-26 18:19:30.533055] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:42.701 [2024-11-26 18:19:30.533068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:42.701 request: 00:23:42.701 { 00:23:42.701 "name": "TLSTEST", 00:23:42.701 "trtype": "tcp", 00:23:42.701 "traddr": "10.0.0.2", 00:23:42.701 "adrfam": "ipv4", 00:23:42.701 "trsvcid": "4420", 00:23:42.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.701 "prchk_reftag": false, 00:23:42.701 "prchk_guard": false, 00:23:42.701 "hdgst": false, 00:23:42.701 "ddgst": false, 00:23:42.701 "psk": "key0", 00:23:42.701 "allow_unrecognized_csi": false, 00:23:42.701 "method": "bdev_nvme_attach_controller", 00:23:42.701 "req_id": 1 00:23:42.701 } 00:23:42.701 Got JSON-RPC error response 00:23:42.701 response: 00:23:42.701 { 00:23:42.701 "code": -5, 00:23:42.701 "message": "Input/output error" 00:23:42.701 } 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 630765 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 630765 ']' 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 630765 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 630765 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 630765' 00:23:42.701 killing process with pid 630765 00:23:42.701 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 630765 00:23:42.701 Received shutdown signal, test time was about 10.000000 seconds 00:23:42.701 00:23:42.701 Latency(us) 00:23:42.701 [2024-11-26T17:19:30.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.701 [2024-11-26T17:19:30.713Z] =================================================================================================================== 00:23:42.702 [2024-11-26T17:19:30.713Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:42.702 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 630765 00:23:42.968 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:42.968 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:42.968 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:42.968 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:42.968 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Rkp8QaWXZZ 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.Rkp8QaWXZZ 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Rkp8QaWXZZ 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Rkp8QaWXZZ 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=630905 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 630905 /var/tmp/bdevperf.sock 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 630905 ']' 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.969 18:19:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.969 [2024-11-26 18:19:30.833173] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:23:42.969 [2024-11-26 18:19:30.833273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid630905 ] 00:23:42.969 [2024-11-26 18:19:30.901468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.969 [2024-11-26 18:19:30.962753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.227 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.227 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:43.227 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Rkp8QaWXZZ 00:23:43.484 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:43.743 [2024-11-26 18:19:31.614418] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.743 [2024-11-26 18:19:31.622721] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:43.743 [2024-11-26 18:19:31.622751] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:43.743 [2024-11-26 18:19:31.622815] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:43.743 [2024-11-26 18:19:31.623460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbe2f0 (107): Transport endpoint is not connected 00:23:43.743 [2024-11-26 18:19:31.624451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbe2f0 (9): Bad file descriptor 00:23:43.743 [2024-11-26 18:19:31.625450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:43.743 [2024-11-26 18:19:31.625471] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:43.743 [2024-11-26 18:19:31.625484] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:43.743 [2024-11-26 18:19:31.625499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:23:43.743 request: 00:23:43.743 { 00:23:43.743 "name": "TLSTEST", 00:23:43.743 "trtype": "tcp", 00:23:43.743 "traddr": "10.0.0.2", 00:23:43.743 "adrfam": "ipv4", 00:23:43.743 "trsvcid": "4420", 00:23:43.743 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.743 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:43.743 "prchk_reftag": false, 00:23:43.743 "prchk_guard": false, 00:23:43.743 "hdgst": false, 00:23:43.743 "ddgst": false, 00:23:43.743 "psk": "key0", 00:23:43.743 "allow_unrecognized_csi": false, 00:23:43.743 "method": "bdev_nvme_attach_controller", 00:23:43.743 "req_id": 1 00:23:43.743 } 00:23:43.743 Got JSON-RPC error response 00:23:43.743 response: 00:23:43.743 { 00:23:43.743 "code": -5, 00:23:43.743 "message": "Input/output error" 00:23:43.743 } 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 630905 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 630905 ']' 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 630905 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 630905 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 630905' 00:23:43.743 killing process with pid 630905 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 630905 00:23:43.743 Received shutdown signal, test time was about 10.000000 seconds 00:23:43.743 00:23:43.743 Latency(us) 00:23:43.743 [2024-11-26T17:19:31.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.743 [2024-11-26T17:19:31.754Z] =================================================================================================================== 00:23:43.743 [2024-11-26T17:19:31.754Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:43.743 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 630905 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Rkp8QaWXZZ 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.Rkp8QaWXZZ 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Rkp8QaWXZZ 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Rkp8QaWXZZ 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=631055 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 631055 /var/tmp/bdevperf.sock 00:23:44.001 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 631055 ']' 00:23:44.002 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.002 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.002 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.002 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.002 18:19:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.002 [2024-11-26 18:19:31.955930] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:23:44.002 [2024-11-26 18:19:31.956038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631055 ] 00:23:44.260 [2024-11-26 18:19:32.024202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.260 [2024-11-26 18:19:32.082827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.260 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.260 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.260 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Rkp8QaWXZZ 00:23:44.517 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:44.775 [2024-11-26 18:19:32.730235] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.775 [2024-11-26 18:19:32.736003] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:44.775 [2024-11-26 18:19:32.736035] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:44.775 [2024-11-26 18:19:32.736091] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:44.775 [2024-11-26 18:19:32.736577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe772f0 (107): Transport endpoint is not connected 00:23:44.775 [2024-11-26 18:19:32.737564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe772f0 (9): Bad file descriptor 00:23:44.775 [2024-11-26 18:19:32.738564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:44.775 [2024-11-26 18:19:32.738586] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:44.775 [2024-11-26 18:19:32.738614] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:44.775 [2024-11-26 18:19:32.738629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:23:44.775 request: 00:23:44.775 { 00:23:44.775 "name": "TLSTEST", 00:23:44.775 "trtype": "tcp", 00:23:44.775 "traddr": "10.0.0.2", 00:23:44.775 "adrfam": "ipv4", 00:23:44.775 "trsvcid": "4420", 00:23:44.775 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.775 "prchk_reftag": false, 00:23:44.775 "prchk_guard": false, 00:23:44.775 "hdgst": false, 00:23:44.775 "ddgst": false, 00:23:44.775 "psk": "key0", 00:23:44.775 "allow_unrecognized_csi": false, 00:23:44.775 "method": "bdev_nvme_attach_controller", 00:23:44.775 "req_id": 1 00:23:44.775 } 00:23:44.775 Got JSON-RPC error response 00:23:44.775 response: 00:23:44.775 { 00:23:44.775 "code": -5, 00:23:44.775 "message": "Input/output error" 00:23:44.775 } 00:23:44.775 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 631055 00:23:44.775 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 631055 ']' 00:23:44.775 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 631055 00:23:44.775 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.775 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.775 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 631055 00:23:45.034 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:45.034 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:45.034 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 631055' 00:23:45.034 killing process with pid 631055 00:23:45.034 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 631055 00:23:45.034 Received shutdown signal, test time was about 10.000000 seconds 00:23:45.034 00:23:45.034 Latency(us) 00:23:45.034 [2024-11-26T17:19:33.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.034 [2024-11-26T17:19:33.045Z] =================================================================================================================== 00:23:45.034 [2024-11-26T17:19:33.045Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:45.034 18:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 631055 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:45.034 18:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=631192 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 631192 /var/tmp/bdevperf.sock 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 631192 ']' 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.034 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.291 [2024-11-26 18:19:33.074679] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:23:45.291 [2024-11-26 18:19:33.074781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631192 ] 00:23:45.291 [2024-11-26 18:19:33.141525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.291 [2024-11-26 18:19:33.196578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.549 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.549 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:45.549 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:45.807 [2024-11-26 18:19:33.564325] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:45.807 [2024-11-26 18:19:33.564383] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:45.807 request: 00:23:45.807 { 00:23:45.807 "name": "key0", 00:23:45.807 "path": "", 00:23:45.807 "method": "keyring_file_add_key", 00:23:45.807 "req_id": 1 00:23:45.807 } 00:23:45.807 Got JSON-RPC error response 00:23:45.807 response: 00:23:45.807 { 00:23:45.807 "code": -1, 00:23:45.807 "message": "Operation not permitted" 00:23:45.807 } 00:23:45.807 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:46.065 [2024-11-26 18:19:33.841189] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.065 [2024-11-26 18:19:33.841261] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:46.065 request: 00:23:46.065 { 00:23:46.065 "name": "TLSTEST", 00:23:46.065 "trtype": "tcp", 00:23:46.065 "traddr": "10.0.0.2", 00:23:46.065 "adrfam": "ipv4", 00:23:46.065 "trsvcid": "4420", 00:23:46.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:46.065 "prchk_reftag": false, 00:23:46.065 "prchk_guard": false, 00:23:46.065 "hdgst": false, 00:23:46.065 "ddgst": false, 00:23:46.065 "psk": "key0", 00:23:46.065 "allow_unrecognized_csi": false, 00:23:46.065 "method": "bdev_nvme_attach_controller", 00:23:46.065 "req_id": 1 00:23:46.065 } 00:23:46.065 Got JSON-RPC error response 00:23:46.065 response: 00:23:46.065 { 00:23:46.065 "code": -126, 00:23:46.065 "message": "Required key not available" 00:23:46.065 } 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 631192 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 631192 ']' 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 631192 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 631192 
00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 631192' 00:23:46.066 killing process with pid 631192 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 631192 00:23:46.066 Received shutdown signal, test time was about 10.000000 seconds 00:23:46.066 00:23:46.066 Latency(us) 00:23:46.066 [2024-11-26T17:19:34.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.066 [2024-11-26T17:19:34.077Z] =================================================================================================================== 00:23:46.066 [2024-11-26T17:19:34.077Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:46.066 18:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 631192 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 627521 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 627521 ']' 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 627521 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 627521 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 627521' 00:23:46.324 killing process with pid 627521 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 627521 00:23:46.324 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 627521 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.JSIqjWZCxi 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.JSIqjWZCxi 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:46.582 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=631343 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 631343 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 631343 ']' 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.583 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.583 [2024-11-26 18:19:34.520498] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:23:46.583 [2024-11-26 18:19:34.520618] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.583 [2024-11-26 18:19:34.591256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.841 [2024-11-26 18:19:34.644157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.841 [2024-11-26 18:19:34.644229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
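The NVMeTLSkey-1:02: string assembled just above is the TLS PSK interchange form of the key: the configured key bytes with a CRC-32 appended, base64-encoded between the version/hash-indicator prefix and a trailing colon (the 02 mirrors the digest argument of 2 passed to format_interchange_psk). A minimal sketch of what the inline python step appears to compute, assuming a little-endian CRC-32 as in zlib:

# Sketch only: rebuild the interchange key from the 48-character string used above.
key="00112233445566778899aabbccddeeff0011223344556677"
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # integrity-check bytes appended to the key
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF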
00:23:46.841 [2024-11-26 18:19:34.644252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.841 [2024-11-26 18:19:34.644263] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.841 [2024-11-26 18:19:34.644273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.841 [2024-11-26 18:19:34.644893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.841 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.841 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:46.841 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:46.841 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:46.841 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.841 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:46.841 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.JSIqjWZCxi 00:23:46.841 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JSIqjWZCxi 00:23:46.841 18:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:47.098 [2024-11-26 18:19:35.039400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.098 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:47.356 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:47.614 [2024-11-26 18:19:35.576891] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.614 [2024-11-26 18:19:35.577137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.614 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:47.872 malloc0 00:23:47.872 18:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:48.436 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi 00:23:48.436 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JSIqjWZCxi 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JSIqjWZCxi 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=631628 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 631628 /var/tmp/bdevperf.sock 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 631628 ']' 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.694 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.952 [2024-11-26 18:19:36.714917] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
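Collected from the setup_nvmf_tgt steps just above, the target-side TLS configuration reduces to the following rpc.py sequence (a sketch of what was run, with the same names; the key path is the mktemp result created earlier):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport, a subsystem with a malloc namespace, and a TLS-enabled listener (-k)
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Register the 0600-permission key file and bind it to the allowed host as its PSK
$rpc keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0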
00:23:48.952 [2024-11-26 18:19:36.715012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631628 ] 00:23:48.952 [2024-11-26 18:19:36.783078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.952 [2024-11-26 18:19:36.841857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.952 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.952 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.952 18:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi 00:23:49.518 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:49.518 [2024-11-26 18:19:37.477697] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.776 TLSTESTn1 00:23:49.776 18:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:49.776 Running I/O for 10 seconds... 00:23:52.077 3454.00 IOPS, 13.49 MiB/s [2024-11-26T17:19:41.028Z] 3481.50 IOPS, 13.60 MiB/s [2024-11-26T17:19:41.956Z] 3517.00 IOPS, 13.74 MiB/s [2024-11-26T17:19:42.920Z] 3520.75 IOPS, 13.75 MiB/s [2024-11-26T17:19:43.865Z] 3524.40 IOPS, 13.77 MiB/s [2024-11-26T17:19:44.796Z] 3530.00 IOPS, 13.79 MiB/s [2024-11-26T17:19:45.729Z] 3503.86 IOPS, 13.69 MiB/s [2024-11-26T17:19:47.101Z] 3514.50 IOPS, 13.73 MiB/s [2024-11-26T17:19:48.032Z] 3507.67 IOPS, 13.70 MiB/s [2024-11-26T17:19:48.032Z] 3515.40 IOPS, 13.73 MiB/s 00:24:00.021 Latency(us) 00:24:00.021 [2024-11-26T17:19:48.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.021 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:00.021 Verification LBA range: start 0x0 length 0x2000 00:24:00.021 TLSTESTn1 : 10.02 3520.66 13.75 0.00 0.00 36293.89 8641.04 40001.23 00:24:00.021 [2024-11-26T17:19:48.032Z] =================================================================================================================== 00:24:00.021 [2024-11-26T17:19:48.032Z] Total : 3520.66 13.75 0.00 0.00 36293.89 8641.04 40001.23 00:24:00.021 { 00:24:00.021 "results": [ 00:24:00.021 { 00:24:00.021 "job": "TLSTESTn1", 00:24:00.021 "core_mask": "0x4", 00:24:00.021 "workload": "verify", 00:24:00.021 "status": "finished", 00:24:00.021 "verify_range": { 00:24:00.021 "start": 0, 00:24:00.021 "length": 8192 00:24:00.021 }, 00:24:00.021 "queue_depth": 128, 00:24:00.021 "io_size": 4096, 00:24:00.021 "runtime": 10.021146, 00:24:00.021 "iops": 3520.655222466572, 00:24:00.021 "mibps": 13.752559462760047, 00:24:00.021 "io_failed": 0, 00:24:00.021 "io_timeout": 0, 00:24:00.021 "avg_latency_us": 36293.89386099118, 00:24:00.021 "min_latency_us": 8641.042962962963, 00:24:00.021 "max_latency_us": 40001.23259259259 00:24:00.021 } 00:24:00.021 ], 00:24:00.022 
"core_count": 1 00:24:00.022 } 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 631628 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 631628 ']' 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 631628 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 631628 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 631628' 00:24:00.022 killing process with pid 631628 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 631628 00:24:00.022 Received shutdown signal, test time was about 10.000000 seconds 00:24:00.022 00:24:00.022 Latency(us) 00:24:00.022 [2024-11-26T17:19:48.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.022 [2024-11-26T17:19:48.033Z] =================================================================================================================== 00:24:00.022 [2024-11-26T17:19:48.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.022 18:19:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 631628 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.JSIqjWZCxi 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JSIqjWZCxi 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JSIqjWZCxi 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.JSIqjWZCxi 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:00.022 
18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.JSIqjWZCxi 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=632969 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 632969 /var/tmp/bdevperf.sock 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 632969 ']' 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.022 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.280 [2024-11-26 18:19:48.061949] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:24:00.280 [2024-11-26 18:19:48.062044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid632969 ] 00:24:00.280 [2024-11-26 18:19:48.127973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.280 [2024-11-26 18:19:48.184229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.537 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.537 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:00.537 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi 00:24:00.795 [2024-11-26 18:19:48.563868] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JSIqjWZCxi': 0100666 00:24:00.795 [2024-11-26 18:19:48.563904] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:00.795 request: 00:24:00.795 { 00:24:00.795 "name": "key0", 00:24:00.795 "path": "/tmp/tmp.JSIqjWZCxi", 00:24:00.795 "method": "keyring_file_add_key", 00:24:00.795 "req_id": 1 00:24:00.795 } 00:24:00.795 Got JSON-RPC error response 00:24:00.795 response: 00:24:00.795 { 00:24:00.795 "code": -1, 00:24:00.795 "message": "Operation not permitted" 00:24:00.795 } 00:24:00.795 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:01.053 [2024-11-26 18:19:48.840746] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.053 [2024-11-26 18:19:48.840806] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:01.053 request: 00:24:01.053 { 00:24:01.053 "name": "TLSTEST", 00:24:01.053 "trtype": "tcp", 00:24:01.053 "traddr": "10.0.0.2", 00:24:01.053 "adrfam": "ipv4", 00:24:01.053 "trsvcid": "4420", 00:24:01.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.053 "prchk_reftag": false, 00:24:01.053 "prchk_guard": false, 00:24:01.053 "hdgst": false, 00:24:01.053 "ddgst": false, 00:24:01.053 "psk": "key0", 00:24:01.053 "allow_unrecognized_csi": false, 00:24:01.053 "method": "bdev_nvme_attach_controller", 00:24:01.053 "req_id": 1 00:24:01.053 } 00:24:01.053 Got JSON-RPC error response 00:24:01.053 response: 00:24:01.053 { 00:24:01.053 "code": -126, 00:24:01.053 "message": "Required key not available" 00:24:01.053 } 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 632969 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 632969 ']' 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 632969 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 632969 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 632969' 00:24:01.053 killing process with pid 632969 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 632969 00:24:01.053 Received shutdown signal, test time was about 10.000000 seconds 00:24:01.053 00:24:01.053 Latency(us) 00:24:01.053 [2024-11-26T17:19:49.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.053 [2024-11-26T17:19:49.064Z] =================================================================================================================== 00:24:01.053 [2024-11-26T17:19:49.064Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:01.053 18:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 632969 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 631343 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 631343 ']' 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 631343 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 631343 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 631343' 00:24:01.311 killing process with pid 631343 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 631343 00:24:01.311 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 631343 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=633122 
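
The keyring_file_add_key error above (Invalid permissions ... 0100666) is the intended negative path: the file-based keyring refuses a PSK file left readable by group or others, so the attach that depends on key0 then fails with 'Required key not available'. The fix applied later in this run is a plain chmod to 0600; a sketch of the check-and-restrict step, with the key path taken from the log:

KEY=/tmp/tmp.JSIqjWZCxi
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Tighten the mode first: the keyring rejected the 0666 file above, and the test
# later repairs it with chmod 0600 before retrying the same add.
[ "$(stat -c '%a' "$KEY")" = "600" ] || chmod 0600 "$KEY"

"$RPC" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$KEY"
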
00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 633122 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 633122 ']' 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.569 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.569 [2024-11-26 18:19:49.436098] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:01.569 [2024-11-26 18:19:49.436194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.569 [2024-11-26 18:19:49.510356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.569 [2024-11-26 18:19:49.565701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.569 [2024-11-26 18:19:49.565757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.569 [2024-11-26 18:19:49.565784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.569 [2024-11-26 18:19:49.565796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.569 [2024-11-26 18:19:49.565805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
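
The setup_nvmf_tgt helper traced next (target/tls.sh@50-59) builds the TLS-capable target one RPC at a time. Collected into a single sketch for readability (paths shortened; addresses and arguments as traced), this is the sequence it issues; the first pass below, run as NOT setup_nvmf_tgt, is expected to stop at the keyring step while the key file is still 0666, and the same sequence succeeds after the chmod:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.JSIqjWZCxi
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o                                   # TCP transport (flags as traced)
"$RPC" nvmf_create_subsystem "$NQN" -s SPDK00000000000001 -m 10          # subsystem, up to 10 namespaces
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420 -k  # -k requests a TLS (secure channel) listener
"$RPC" bdev_malloc_create 32 4096 -b malloc0                             # 32 MB bdev with 4 KiB blocks
"$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1                         # expose malloc0 as namespace 1
"$RPC" keyring_file_add_key key0 "$KEY"                                  # register the PSK file (must be mode 0600)
"$RPC" nvmf_subsystem_add_host "$NQN" nqn.2016-06.io.spdk:host1 --psk key0
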
00:24:01.569 [2024-11-26 18:19:49.566402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.JSIqjWZCxi 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.JSIqjWZCxi 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.JSIqjWZCxi 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JSIqjWZCxi 00:24:01.828 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:02.087 [2024-11-26 18:19:49.960081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.087 18:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:02.344 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:02.603 [2024-11-26 18:19:50.529634] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.603 [2024-11-26 18:19:50.529880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.603 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:02.860 malloc0 00:24:02.860 18:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:03.118 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi 00:24:03.376 [2024-11-26 
18:19:51.371471] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.JSIqjWZCxi': 0100666 00:24:03.376 [2024-11-26 18:19:51.371517] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:03.376 request: 00:24:03.376 { 00:24:03.376 "name": "key0", 00:24:03.376 "path": "/tmp/tmp.JSIqjWZCxi", 00:24:03.376 "method": "keyring_file_add_key", 00:24:03.376 "req_id": 1 00:24:03.376 } 00:24:03.376 Got JSON-RPC error response 00:24:03.376 response: 00:24:03.376 { 00:24:03.376 "code": -1, 00:24:03.376 "message": "Operation not permitted" 00:24:03.376 } 00:24:03.634 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:03.634 [2024-11-26 18:19:51.640267] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:03.634 [2024-11-26 18:19:51.640352] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:03.892 request: 00:24:03.892 { 00:24:03.892 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.892 "host": "nqn.2016-06.io.spdk:host1", 00:24:03.892 "psk": "key0", 00:24:03.892 "method": "nvmf_subsystem_add_host", 00:24:03.892 "req_id": 1 00:24:03.892 } 00:24:03.892 Got JSON-RPC error response 00:24:03.892 response: 00:24:03.892 { 00:24:03.892 "code": -32603, 00:24:03.892 "message": "Internal error" 00:24:03.892 } 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 633122 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 633122 ']' 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 633122 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633122 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633122' 00:24:03.892 killing process with pid 633122 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 633122 00:24:03.892 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 633122 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.JSIqjWZCxi 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=633535 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 633535 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 633535 ']' 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.150 18:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.150 [2024-11-26 18:19:51.984109] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:04.150 [2024-11-26 18:19:51.984182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.150 [2024-11-26 18:19:52.055928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.150 [2024-11-26 18:19:52.112773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.150 [2024-11-26 18:19:52.112818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.150 [2024-11-26 18:19:52.112848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.150 [2024-11-26 18:19:52.112860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.150 [2024-11-26 18:19:52.112881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
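
After the chmod, the same setup sequence is rerun against the fresh target started below, and once it succeeds target/tls.sh@198-199 snapshots both the target and the bdevperf instance with save_config (the two large JSON dumps further down). A hedged sketch of pulling only the TLS-relevant entries out of such a dump; the python3 filter is illustrative and not part of the test:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Print just the config entries that carry the PSK wiring: the keyring entry,
# the TLS listener, and the host that references the key.
"$RPC" save_config | python3 -c '
import json, sys
cfg = json.load(sys.stdin)
for sub in cfg["subsystems"]:
    for entry in sub.get("config") or []:
        if entry.get("method") in ("keyring_file_add_key",
                                   "nvmf_subsystem_add_listener",
                                   "nvmf_subsystem_add_host"):
            print(json.dumps(entry, indent=2))
'
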
00:24:04.150 [2024-11-26 18:19:52.113492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.408 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.408 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:04.408 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.408 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.408 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.408 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.408 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.JSIqjWZCxi 00:24:04.408 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JSIqjWZCxi 00:24:04.408 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:04.666 [2024-11-26 18:19:52.492060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.666 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:04.924 18:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:05.181 [2024-11-26 18:19:53.129826] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:05.181 [2024-11-26 18:19:53.130086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.181 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:05.747 malloc0 00:24:05.747 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:05.747 18:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi 00:24:06.005 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=633829 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 633829 /var/tmp/bdevperf.sock 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 633829 ']' 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.570 [2024-11-26 18:19:54.329244] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:06.570 [2024-11-26 18:19:54.329359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633829 ] 00:24:06.570 [2024-11-26 18:19:54.396048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.570 [2024-11-26 18:19:54.453311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:06.570 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi 00:24:06.828 18:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.086 [2024-11-26 18:19:55.073441] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.344 TLSTESTn1 00:24:07.344 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:07.602 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:07.602 "subsystems": [ 00:24:07.602 { 00:24:07.602 "subsystem": "keyring", 00:24:07.602 "config": [ 00:24:07.602 { 00:24:07.602 "method": "keyring_file_add_key", 00:24:07.603 "params": { 00:24:07.603 "name": "key0", 00:24:07.603 "path": "/tmp/tmp.JSIqjWZCxi" 00:24:07.603 } 00:24:07.603 } 00:24:07.603 ] 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "subsystem": "iobuf", 00:24:07.603 "config": [ 00:24:07.603 { 00:24:07.603 "method": "iobuf_set_options", 00:24:07.603 "params": { 00:24:07.603 "small_pool_count": 8192, 00:24:07.603 "large_pool_count": 1024, 00:24:07.603 "small_bufsize": 8192, 00:24:07.603 "large_bufsize": 135168, 00:24:07.603 "enable_numa": false 00:24:07.603 } 00:24:07.603 } 00:24:07.603 ] 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "subsystem": "sock", 00:24:07.603 "config": [ 00:24:07.603 { 00:24:07.603 "method": "sock_set_default_impl", 00:24:07.603 "params": { 00:24:07.603 "impl_name": "posix" 
00:24:07.603 } 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "method": "sock_impl_set_options", 00:24:07.603 "params": { 00:24:07.603 "impl_name": "ssl", 00:24:07.603 "recv_buf_size": 4096, 00:24:07.603 "send_buf_size": 4096, 00:24:07.603 "enable_recv_pipe": true, 00:24:07.603 "enable_quickack": false, 00:24:07.603 "enable_placement_id": 0, 00:24:07.603 "enable_zerocopy_send_server": true, 00:24:07.603 "enable_zerocopy_send_client": false, 00:24:07.603 "zerocopy_threshold": 0, 00:24:07.603 "tls_version": 0, 00:24:07.603 "enable_ktls": false 00:24:07.603 } 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "method": "sock_impl_set_options", 00:24:07.603 "params": { 00:24:07.603 "impl_name": "posix", 00:24:07.603 "recv_buf_size": 2097152, 00:24:07.603 "send_buf_size": 2097152, 00:24:07.603 "enable_recv_pipe": true, 00:24:07.603 "enable_quickack": false, 00:24:07.603 "enable_placement_id": 0, 00:24:07.603 "enable_zerocopy_send_server": true, 00:24:07.603 "enable_zerocopy_send_client": false, 00:24:07.603 "zerocopy_threshold": 0, 00:24:07.603 "tls_version": 0, 00:24:07.603 "enable_ktls": false 00:24:07.603 } 00:24:07.603 } 00:24:07.603 ] 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "subsystem": "vmd", 00:24:07.603 "config": [] 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "subsystem": "accel", 00:24:07.603 "config": [ 00:24:07.603 { 00:24:07.603 "method": "accel_set_options", 00:24:07.603 "params": { 00:24:07.603 "small_cache_size": 128, 00:24:07.603 "large_cache_size": 16, 00:24:07.603 "task_count": 2048, 00:24:07.603 "sequence_count": 2048, 00:24:07.603 "buf_count": 2048 00:24:07.603 } 00:24:07.603 } 00:24:07.603 ] 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "subsystem": "bdev", 00:24:07.603 "config": [ 00:24:07.603 { 00:24:07.603 "method": "bdev_set_options", 00:24:07.603 "params": { 00:24:07.603 "bdev_io_pool_size": 65535, 00:24:07.603 "bdev_io_cache_size": 256, 00:24:07.603 "bdev_auto_examine": true, 00:24:07.603 "iobuf_small_cache_size": 128, 00:24:07.603 "iobuf_large_cache_size": 16 00:24:07.603 } 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "method": "bdev_raid_set_options", 00:24:07.603 "params": { 00:24:07.603 "process_window_size_kb": 1024, 00:24:07.603 "process_max_bandwidth_mb_sec": 0 00:24:07.603 } 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "method": "bdev_iscsi_set_options", 00:24:07.603 "params": { 00:24:07.603 "timeout_sec": 30 00:24:07.603 } 00:24:07.603 }, 00:24:07.603 { 00:24:07.603 "method": "bdev_nvme_set_options", 00:24:07.604 "params": { 00:24:07.604 "action_on_timeout": "none", 00:24:07.604 "timeout_us": 0, 00:24:07.604 "timeout_admin_us": 0, 00:24:07.604 "keep_alive_timeout_ms": 10000, 00:24:07.604 "arbitration_burst": 0, 00:24:07.604 "low_priority_weight": 0, 00:24:07.604 "medium_priority_weight": 0, 00:24:07.604 "high_priority_weight": 0, 00:24:07.604 "nvme_adminq_poll_period_us": 10000, 00:24:07.604 "nvme_ioq_poll_period_us": 0, 00:24:07.604 "io_queue_requests": 0, 00:24:07.604 "delay_cmd_submit": true, 00:24:07.604 "transport_retry_count": 4, 00:24:07.604 "bdev_retry_count": 3, 00:24:07.604 "transport_ack_timeout": 0, 00:24:07.604 "ctrlr_loss_timeout_sec": 0, 00:24:07.604 "reconnect_delay_sec": 0, 00:24:07.604 "fast_io_fail_timeout_sec": 0, 00:24:07.604 "disable_auto_failback": false, 00:24:07.604 "generate_uuids": false, 00:24:07.604 "transport_tos": 0, 00:24:07.604 "nvme_error_stat": false, 00:24:07.604 "rdma_srq_size": 0, 00:24:07.604 "io_path_stat": false, 00:24:07.604 "allow_accel_sequence": false, 00:24:07.604 "rdma_max_cq_size": 0, 00:24:07.604 
"rdma_cm_event_timeout_ms": 0, 00:24:07.604 "dhchap_digests": [ 00:24:07.604 "sha256", 00:24:07.604 "sha384", 00:24:07.604 "sha512" 00:24:07.604 ], 00:24:07.604 "dhchap_dhgroups": [ 00:24:07.604 "null", 00:24:07.604 "ffdhe2048", 00:24:07.604 "ffdhe3072", 00:24:07.604 "ffdhe4096", 00:24:07.604 "ffdhe6144", 00:24:07.604 "ffdhe8192" 00:24:07.604 ] 00:24:07.604 } 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "method": "bdev_nvme_set_hotplug", 00:24:07.604 "params": { 00:24:07.604 "period_us": 100000, 00:24:07.604 "enable": false 00:24:07.604 } 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "method": "bdev_malloc_create", 00:24:07.604 "params": { 00:24:07.604 "name": "malloc0", 00:24:07.604 "num_blocks": 8192, 00:24:07.604 "block_size": 4096, 00:24:07.604 "physical_block_size": 4096, 00:24:07.604 "uuid": "9155da82-268c-455c-8adf-2cf86b4db581", 00:24:07.604 "optimal_io_boundary": 0, 00:24:07.604 "md_size": 0, 00:24:07.604 "dif_type": 0, 00:24:07.604 "dif_is_head_of_md": false, 00:24:07.604 "dif_pi_format": 0 00:24:07.604 } 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "method": "bdev_wait_for_examine" 00:24:07.604 } 00:24:07.604 ] 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "subsystem": "nbd", 00:24:07.604 "config": [] 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "subsystem": "scheduler", 00:24:07.604 "config": [ 00:24:07.604 { 00:24:07.604 "method": "framework_set_scheduler", 00:24:07.604 "params": { 00:24:07.604 "name": "static" 00:24:07.604 } 00:24:07.604 } 00:24:07.604 ] 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "subsystem": "nvmf", 00:24:07.604 "config": [ 00:24:07.604 { 00:24:07.604 "method": "nvmf_set_config", 00:24:07.604 "params": { 00:24:07.604 "discovery_filter": "match_any", 00:24:07.604 "admin_cmd_passthru": { 00:24:07.604 "identify_ctrlr": false 00:24:07.604 }, 00:24:07.604 "dhchap_digests": [ 00:24:07.604 "sha256", 00:24:07.604 "sha384", 00:24:07.604 "sha512" 00:24:07.604 ], 00:24:07.604 "dhchap_dhgroups": [ 00:24:07.604 "null", 00:24:07.604 "ffdhe2048", 00:24:07.604 "ffdhe3072", 00:24:07.604 "ffdhe4096", 00:24:07.604 "ffdhe6144", 00:24:07.604 "ffdhe8192" 00:24:07.604 ] 00:24:07.604 } 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "method": "nvmf_set_max_subsystems", 00:24:07.604 "params": { 00:24:07.604 "max_subsystems": 1024 00:24:07.604 } 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "method": "nvmf_set_crdt", 00:24:07.604 "params": { 00:24:07.604 "crdt1": 0, 00:24:07.604 "crdt2": 0, 00:24:07.604 "crdt3": 0 00:24:07.604 } 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "method": "nvmf_create_transport", 00:24:07.604 "params": { 00:24:07.604 "trtype": "TCP", 00:24:07.604 "max_queue_depth": 128, 00:24:07.604 "max_io_qpairs_per_ctrlr": 127, 00:24:07.604 "in_capsule_data_size": 4096, 00:24:07.604 "max_io_size": 131072, 00:24:07.604 "io_unit_size": 131072, 00:24:07.604 "max_aq_depth": 128, 00:24:07.604 "num_shared_buffers": 511, 00:24:07.604 "buf_cache_size": 4294967295, 00:24:07.604 "dif_insert_or_strip": false, 00:24:07.604 "zcopy": false, 00:24:07.604 "c2h_success": false, 00:24:07.604 "sock_priority": 0, 00:24:07.604 "abort_timeout_sec": 1, 00:24:07.604 "ack_timeout": 0, 00:24:07.604 "data_wr_pool_size": 0 00:24:07.604 } 00:24:07.604 }, 00:24:07.604 { 00:24:07.604 "method": "nvmf_create_subsystem", 00:24:07.604 "params": { 00:24:07.604 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.604 "allow_any_host": false, 00:24:07.604 "serial_number": "SPDK00000000000001", 00:24:07.605 "model_number": "SPDK bdev Controller", 00:24:07.605 "max_namespaces": 10, 00:24:07.605 "min_cntlid": 1, 00:24:07.605 
"max_cntlid": 65519, 00:24:07.605 "ana_reporting": false 00:24:07.605 } 00:24:07.605 }, 00:24:07.605 { 00:24:07.605 "method": "nvmf_subsystem_add_host", 00:24:07.605 "params": { 00:24:07.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.605 "host": "nqn.2016-06.io.spdk:host1", 00:24:07.605 "psk": "key0" 00:24:07.605 } 00:24:07.605 }, 00:24:07.605 { 00:24:07.605 "method": "nvmf_subsystem_add_ns", 00:24:07.605 "params": { 00:24:07.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.605 "namespace": { 00:24:07.605 "nsid": 1, 00:24:07.605 "bdev_name": "malloc0", 00:24:07.605 "nguid": "9155DA82268C455C8ADF2CF86B4DB581", 00:24:07.605 "uuid": "9155da82-268c-455c-8adf-2cf86b4db581", 00:24:07.605 "no_auto_visible": false 00:24:07.605 } 00:24:07.605 } 00:24:07.605 }, 00:24:07.605 { 00:24:07.605 "method": "nvmf_subsystem_add_listener", 00:24:07.605 "params": { 00:24:07.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.605 "listen_address": { 00:24:07.605 "trtype": "TCP", 00:24:07.605 "adrfam": "IPv4", 00:24:07.605 "traddr": "10.0.0.2", 00:24:07.605 "trsvcid": "4420" 00:24:07.605 }, 00:24:07.605 "secure_channel": true 00:24:07.605 } 00:24:07.605 } 00:24:07.605 ] 00:24:07.605 } 00:24:07.605 ] 00:24:07.605 }' 00:24:07.605 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:07.863 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:07.863 "subsystems": [ 00:24:07.863 { 00:24:07.863 "subsystem": "keyring", 00:24:07.863 "config": [ 00:24:07.863 { 00:24:07.863 "method": "keyring_file_add_key", 00:24:07.863 "params": { 00:24:07.863 "name": "key0", 00:24:07.863 "path": "/tmp/tmp.JSIqjWZCxi" 00:24:07.863 } 00:24:07.863 } 00:24:07.863 ] 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "subsystem": "iobuf", 00:24:07.863 "config": [ 00:24:07.863 { 00:24:07.863 "method": "iobuf_set_options", 00:24:07.863 "params": { 00:24:07.863 "small_pool_count": 8192, 00:24:07.863 "large_pool_count": 1024, 00:24:07.863 "small_bufsize": 8192, 00:24:07.863 "large_bufsize": 135168, 00:24:07.863 "enable_numa": false 00:24:07.863 } 00:24:07.863 } 00:24:07.863 ] 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "subsystem": "sock", 00:24:07.863 "config": [ 00:24:07.863 { 00:24:07.863 "method": "sock_set_default_impl", 00:24:07.863 "params": { 00:24:07.863 "impl_name": "posix" 00:24:07.863 } 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "method": "sock_impl_set_options", 00:24:07.863 "params": { 00:24:07.863 "impl_name": "ssl", 00:24:07.863 "recv_buf_size": 4096, 00:24:07.863 "send_buf_size": 4096, 00:24:07.863 "enable_recv_pipe": true, 00:24:07.863 "enable_quickack": false, 00:24:07.863 "enable_placement_id": 0, 00:24:07.863 "enable_zerocopy_send_server": true, 00:24:07.863 "enable_zerocopy_send_client": false, 00:24:07.863 "zerocopy_threshold": 0, 00:24:07.863 "tls_version": 0, 00:24:07.863 "enable_ktls": false 00:24:07.863 } 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "method": "sock_impl_set_options", 00:24:07.863 "params": { 00:24:07.863 "impl_name": "posix", 00:24:07.863 "recv_buf_size": 2097152, 00:24:07.863 "send_buf_size": 2097152, 00:24:07.863 "enable_recv_pipe": true, 00:24:07.863 "enable_quickack": false, 00:24:07.863 "enable_placement_id": 0, 00:24:07.863 "enable_zerocopy_send_server": true, 00:24:07.863 "enable_zerocopy_send_client": false, 00:24:07.863 "zerocopy_threshold": 0, 00:24:07.863 "tls_version": 0, 00:24:07.863 "enable_ktls": false 00:24:07.863 } 00:24:07.863 
} 00:24:07.863 ] 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "subsystem": "vmd", 00:24:07.863 "config": [] 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "subsystem": "accel", 00:24:07.863 "config": [ 00:24:07.863 { 00:24:07.863 "method": "accel_set_options", 00:24:07.863 "params": { 00:24:07.863 "small_cache_size": 128, 00:24:07.863 "large_cache_size": 16, 00:24:07.863 "task_count": 2048, 00:24:07.863 "sequence_count": 2048, 00:24:07.863 "buf_count": 2048 00:24:07.863 } 00:24:07.863 } 00:24:07.863 ] 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "subsystem": "bdev", 00:24:07.863 "config": [ 00:24:07.863 { 00:24:07.863 "method": "bdev_set_options", 00:24:07.863 "params": { 00:24:07.863 "bdev_io_pool_size": 65535, 00:24:07.863 "bdev_io_cache_size": 256, 00:24:07.863 "bdev_auto_examine": true, 00:24:07.863 "iobuf_small_cache_size": 128, 00:24:07.863 "iobuf_large_cache_size": 16 00:24:07.863 } 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "method": "bdev_raid_set_options", 00:24:07.863 "params": { 00:24:07.863 "process_window_size_kb": 1024, 00:24:07.863 "process_max_bandwidth_mb_sec": 0 00:24:07.863 } 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "method": "bdev_iscsi_set_options", 00:24:07.863 "params": { 00:24:07.863 "timeout_sec": 30 00:24:07.863 } 00:24:07.863 }, 00:24:07.863 { 00:24:07.863 "method": "bdev_nvme_set_options", 00:24:07.863 "params": { 00:24:07.863 "action_on_timeout": "none", 00:24:07.863 "timeout_us": 0, 00:24:07.863 "timeout_admin_us": 0, 00:24:07.863 "keep_alive_timeout_ms": 10000, 00:24:07.863 "arbitration_burst": 0, 00:24:07.863 "low_priority_weight": 0, 00:24:07.863 "medium_priority_weight": 0, 00:24:07.864 "high_priority_weight": 0, 00:24:07.864 "nvme_adminq_poll_period_us": 10000, 00:24:07.864 "nvme_ioq_poll_period_us": 0, 00:24:07.864 "io_queue_requests": 512, 00:24:07.864 "delay_cmd_submit": true, 00:24:07.864 "transport_retry_count": 4, 00:24:07.864 "bdev_retry_count": 3, 00:24:07.864 "transport_ack_timeout": 0, 00:24:07.864 "ctrlr_loss_timeout_sec": 0, 00:24:07.864 "reconnect_delay_sec": 0, 00:24:07.864 "fast_io_fail_timeout_sec": 0, 00:24:07.864 "disable_auto_failback": false, 00:24:07.864 "generate_uuids": false, 00:24:07.864 "transport_tos": 0, 00:24:07.864 "nvme_error_stat": false, 00:24:07.864 "rdma_srq_size": 0, 00:24:07.864 "io_path_stat": false, 00:24:07.864 "allow_accel_sequence": false, 00:24:07.864 "rdma_max_cq_size": 0, 00:24:07.864 "rdma_cm_event_timeout_ms": 0, 00:24:07.864 "dhchap_digests": [ 00:24:07.864 "sha256", 00:24:07.864 "sha384", 00:24:07.864 "sha512" 00:24:07.864 ], 00:24:07.864 "dhchap_dhgroups": [ 00:24:07.864 "null", 00:24:07.864 "ffdhe2048", 00:24:07.864 "ffdhe3072", 00:24:07.864 "ffdhe4096", 00:24:07.864 "ffdhe6144", 00:24:07.864 "ffdhe8192" 00:24:07.864 ] 00:24:07.864 } 00:24:07.864 }, 00:24:07.864 { 00:24:07.864 "method": "bdev_nvme_attach_controller", 00:24:07.864 "params": { 00:24:07.864 "name": "TLSTEST", 00:24:07.864 "trtype": "TCP", 00:24:07.864 "adrfam": "IPv4", 00:24:07.864 "traddr": "10.0.0.2", 00:24:07.864 "trsvcid": "4420", 00:24:07.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.864 "prchk_reftag": false, 00:24:07.864 "prchk_guard": false, 00:24:07.864 "ctrlr_loss_timeout_sec": 0, 00:24:07.864 "reconnect_delay_sec": 0, 00:24:07.864 "fast_io_fail_timeout_sec": 0, 00:24:07.864 "psk": "key0", 00:24:07.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.864 "hdgst": false, 00:24:07.864 "ddgst": false, 00:24:07.864 "multipath": "multipath" 00:24:07.864 } 00:24:07.864 }, 00:24:07.864 { 00:24:07.864 "method": 
"bdev_nvme_set_hotplug", 00:24:07.864 "params": { 00:24:07.864 "period_us": 100000, 00:24:07.864 "enable": false 00:24:07.864 } 00:24:07.864 }, 00:24:07.864 { 00:24:07.864 "method": "bdev_wait_for_examine" 00:24:07.864 } 00:24:07.864 ] 00:24:07.864 }, 00:24:07.864 { 00:24:07.864 "subsystem": "nbd", 00:24:07.864 "config": [] 00:24:07.864 } 00:24:07.864 ] 00:24:07.864 }' 00:24:07.864 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 633829 00:24:07.864 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 633829 ']' 00:24:07.864 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 633829 00:24:07.864 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:07.864 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.864 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633829 00:24:08.122 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:08.122 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:08.122 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633829' 00:24:08.122 killing process with pid 633829 00:24:08.122 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 633829 00:24:08.122 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.122 00:24:08.122 Latency(us) 00:24:08.122 [2024-11-26T17:19:56.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.122 [2024-11-26T17:19:56.133Z] =================================================================================================================== 00:24:08.122 [2024-11-26T17:19:56.133Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:08.122 18:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 633829 00:24:08.122 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 633535 00:24:08.122 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 633535 ']' 00:24:08.122 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 633535 00:24:08.122 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:08.122 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.122 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 633535 00:24:08.380 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:08.380 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:08.380 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 633535' 00:24:08.380 killing process with pid 633535 00:24:08.380 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 633535 00:24:08.380 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 633535 00:24:08.380 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:08.380 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.380 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.380 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:08.380 "subsystems": [ 00:24:08.380 { 00:24:08.380 "subsystem": "keyring", 00:24:08.380 "config": [ 00:24:08.380 { 00:24:08.380 "method": "keyring_file_add_key", 00:24:08.380 "params": { 00:24:08.380 "name": "key0", 00:24:08.380 "path": "/tmp/tmp.JSIqjWZCxi" 00:24:08.380 } 00:24:08.380 } 00:24:08.380 ] 00:24:08.380 }, 00:24:08.380 { 00:24:08.380 "subsystem": "iobuf", 00:24:08.380 "config": [ 00:24:08.381 { 00:24:08.381 "method": "iobuf_set_options", 00:24:08.381 "params": { 00:24:08.381 "small_pool_count": 8192, 00:24:08.381 "large_pool_count": 1024, 00:24:08.381 "small_bufsize": 8192, 00:24:08.381 "large_bufsize": 135168, 00:24:08.381 "enable_numa": false 00:24:08.381 } 00:24:08.381 } 00:24:08.381 ] 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "subsystem": "sock", 00:24:08.381 "config": [ 00:24:08.381 { 00:24:08.381 "method": "sock_set_default_impl", 00:24:08.381 "params": { 00:24:08.381 "impl_name": "posix" 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "sock_impl_set_options", 00:24:08.381 "params": { 00:24:08.381 "impl_name": "ssl", 00:24:08.381 "recv_buf_size": 4096, 00:24:08.381 "send_buf_size": 4096, 00:24:08.381 "enable_recv_pipe": true, 00:24:08.381 "enable_quickack": false, 00:24:08.381 "enable_placement_id": 0, 00:24:08.381 "enable_zerocopy_send_server": true, 00:24:08.381 "enable_zerocopy_send_client": false, 00:24:08.381 "zerocopy_threshold": 0, 00:24:08.381 "tls_version": 0, 00:24:08.381 "enable_ktls": false 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "sock_impl_set_options", 00:24:08.381 "params": { 00:24:08.381 "impl_name": "posix", 00:24:08.381 "recv_buf_size": 2097152, 00:24:08.381 "send_buf_size": 2097152, 00:24:08.381 "enable_recv_pipe": true, 00:24:08.381 "enable_quickack": false, 00:24:08.381 "enable_placement_id": 0, 00:24:08.381 "enable_zerocopy_send_server": true, 00:24:08.381 "enable_zerocopy_send_client": false, 00:24:08.381 "zerocopy_threshold": 0, 00:24:08.381 "tls_version": 0, 00:24:08.381 "enable_ktls": false 00:24:08.381 } 00:24:08.381 } 00:24:08.381 ] 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "subsystem": "vmd", 00:24:08.381 "config": [] 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "subsystem": "accel", 00:24:08.381 "config": [ 00:24:08.381 { 00:24:08.381 "method": "accel_set_options", 00:24:08.381 "params": { 00:24:08.381 "small_cache_size": 128, 00:24:08.381 "large_cache_size": 16, 00:24:08.381 "task_count": 2048, 00:24:08.381 "sequence_count": 2048, 00:24:08.381 "buf_count": 2048 00:24:08.381 } 00:24:08.381 } 00:24:08.381 ] 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "subsystem": "bdev", 00:24:08.381 "config": [ 00:24:08.381 { 00:24:08.381 "method": "bdev_set_options", 00:24:08.381 "params": { 00:24:08.381 "bdev_io_pool_size": 65535, 00:24:08.381 "bdev_io_cache_size": 256, 00:24:08.381 "bdev_auto_examine": true, 00:24:08.381 "iobuf_small_cache_size": 128, 00:24:08.381 "iobuf_large_cache_size": 16 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_raid_set_options", 00:24:08.381 "params": { 00:24:08.381 "process_window_size_kb": 1024, 00:24:08.381 "process_max_bandwidth_mb_sec": 0 00:24:08.381 } 00:24:08.381 }, 
00:24:08.381 { 00:24:08.381 "method": "bdev_iscsi_set_options", 00:24:08.381 "params": { 00:24:08.381 "timeout_sec": 30 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_nvme_set_options", 00:24:08.381 "params": { 00:24:08.381 "action_on_timeout": "none", 00:24:08.381 "timeout_us": 0, 00:24:08.381 "timeout_admin_us": 0, 00:24:08.381 "keep_alive_timeout_ms": 10000, 00:24:08.381 "arbitration_burst": 0, 00:24:08.381 "low_priority_weight": 0, 00:24:08.381 "medium_priority_weight": 0, 00:24:08.381 "high_priority_weight": 0, 00:24:08.381 "nvme_adminq_poll_period_us": 10000, 00:24:08.381 "nvme_ioq_poll_period_us": 0, 00:24:08.381 "io_queue_requests": 0, 00:24:08.381 "delay_cmd_submit": true, 00:24:08.381 "transport_retry_count": 4, 00:24:08.381 "bdev_retry_count": 3, 00:24:08.381 "transport_ack_timeout": 0, 00:24:08.381 "ctrlr_loss_timeout_sec": 0, 00:24:08.381 "reconnect_delay_sec": 0, 00:24:08.381 "fast_io_fail_timeout_sec": 0, 00:24:08.381 "disable_auto_failback": false, 00:24:08.381 "generate_uuids": false, 00:24:08.381 "transport_tos": 0, 00:24:08.381 "nvme_error_stat": false, 00:24:08.381 "rdma_srq_size": 0, 00:24:08.381 "io_path_stat": false, 00:24:08.381 "allow_accel_sequence": false, 00:24:08.381 "rdma_max_cq_size": 0, 00:24:08.381 "rdma_cm_event_timeout_ms": 0, 00:24:08.381 "dhchap_digests": [ 00:24:08.381 "sha256", 00:24:08.381 "sha384", 00:24:08.381 "sha512" 00:24:08.381 ], 00:24:08.381 "dhchap_dhgroups": [ 00:24:08.381 "null", 00:24:08.381 "ffdhe2048", 00:24:08.381 "ffdhe3072", 00:24:08.381 "ffdhe4096", 00:24:08.381 "ffdhe6144", 00:24:08.381 "ffdhe8192" 00:24:08.381 ] 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_nvme_set_hotplug", 00:24:08.381 "params": { 00:24:08.381 "period_us": 100000, 00:24:08.381 "enable": false 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_malloc_create", 00:24:08.381 "params": { 00:24:08.381 "name": "malloc0", 00:24:08.381 "num_blocks": 8192, 00:24:08.381 "block_size": 4096, 00:24:08.381 "physical_block_size": 4096, 00:24:08.381 "uuid": "9155da82-268c-455c-8adf-2cf86b4db581", 00:24:08.381 "optimal_io_boundary": 0, 00:24:08.381 "md_size": 0, 00:24:08.381 "dif_type": 0, 00:24:08.381 "dif_is_head_of_md": false, 00:24:08.381 "dif_pi_format": 0 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "bdev_wait_for_examine" 00:24:08.381 } 00:24:08.381 ] 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "subsystem": "nbd", 00:24:08.381 "config": [] 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "subsystem": "scheduler", 00:24:08.381 "config": [ 00:24:08.381 { 00:24:08.381 "method": "framework_set_scheduler", 00:24:08.381 "params": { 00:24:08.381 "name": "static" 00:24:08.381 } 00:24:08.381 } 00:24:08.381 ] 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "subsystem": "nvmf", 00:24:08.381 "config": [ 00:24:08.381 { 00:24:08.381 "method": "nvmf_set_config", 00:24:08.381 "params": { 00:24:08.381 "discovery_filter": "match_any", 00:24:08.381 "admin_cmd_passthru": { 00:24:08.381 "identify_ctrlr": false 00:24:08.381 }, 00:24:08.381 "dhchap_digests": [ 00:24:08.381 "sha256", 00:24:08.381 "sha384", 00:24:08.381 "sha512" 00:24:08.381 ], 00:24:08.381 "dhchap_dhgroups": [ 00:24:08.381 "null", 00:24:08.381 "ffdhe2048", 00:24:08.381 "ffdhe3072", 00:24:08.381 "ffdhe4096", 00:24:08.381 "ffdhe6144", 00:24:08.381 "ffdhe8192" 00:24:08.381 ] 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "nvmf_set_max_subsystems", 00:24:08.381 "params": { 00:24:08.381 "max_subsystems": 1024 
00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "nvmf_set_crdt", 00:24:08.381 "params": { 00:24:08.381 "crdt1": 0, 00:24:08.381 "crdt2": 0, 00:24:08.381 "crdt3": 0 00:24:08.381 } 00:24:08.381 }, 00:24:08.381 { 00:24:08.381 "method": "nvmf_create_transport", 00:24:08.381 "params": { 00:24:08.381 "trtype": "TCP", 00:24:08.381 "max_queue_depth": 128, 00:24:08.381 "max_io_qpairs_per_ctrlr": 127, 00:24:08.381 "in_capsule_data_size": 4096, 00:24:08.381 "max_io_size": 131072, 00:24:08.381 "io_unit_size": 131072, 00:24:08.381 "max_aq_depth": 128, 00:24:08.381 "num_shared_buffers": 511, 00:24:08.381 "buf_cache_size": 4294967295, 00:24:08.381 "dif_insert_or_strip": false, 00:24:08.381 "zcopy": false, 00:24:08.381 "c2h_success": false, 00:24:08.381 "sock_priority": 0, 00:24:08.381 "abort_timeout_sec": 1, 00:24:08.381 "ack_timeout": 0, 00:24:08.382 "data_wr_pool_size": 0 00:24:08.382 } 00:24:08.382 }, 00:24:08.382 { 00:24:08.382 "method": "nvmf_create_subsystem", 00:24:08.382 "params": { 00:24:08.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.382 "allow_any_host": false, 00:24:08.382 "serial_number": "SPDK00000000000001", 00:24:08.382 "model_number": "SPDK bdev Controller", 00:24:08.382 "max_namespaces": 10, 00:24:08.382 "min_cntlid": 1, 00:24:08.382 "max_cntlid": 65519, 00:24:08.382 "ana_reporting": false 00:24:08.382 } 00:24:08.382 }, 00:24:08.382 { 00:24:08.382 "method": "nvmf_subsystem_add_host", 00:24:08.382 "params": { 00:24:08.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.382 "host": "nqn.2016-06.io.spdk:host1", 00:24:08.382 "psk": "key0" 00:24:08.382 } 00:24:08.382 }, 00:24:08.382 { 00:24:08.382 "method": "nvmf_subsystem_add_ns", 00:24:08.382 "params": { 00:24:08.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.382 "namespace": { 00:24:08.382 "nsid": 1, 00:24:08.382 "bdev_name": "malloc0", 00:24:08.382 "nguid": "9155DA82268C455C8ADF2CF86B4DB581", 00:24:08.382 "uuid": "9155da82-268c-455c-8adf-2cf86b4db581", 00:24:08.382 "no_auto_visible": false 00:24:08.382 } 00:24:08.382 } 00:24:08.382 }, 00:24:08.382 { 00:24:08.382 "method": "nvmf_subsystem_add_listener", 00:24:08.382 "params": { 00:24:08.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.382 "listen_address": { 00:24:08.382 "trtype": "TCP", 00:24:08.382 "adrfam": "IPv4", 00:24:08.382 "traddr": "10.0.0.2", 00:24:08.382 "trsvcid": "4420" 00:24:08.382 }, 00:24:08.382 "secure_channel": true 00:24:08.382 } 00:24:08.382 } 00:24:08.382 ] 00:24:08.382 } 00:24:08.382 ] 00:24:08.382 }' 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=634106 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 634106 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 634106 ']' 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:08.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.382 18:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.640 [2024-11-26 18:19:56.432196] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:08.640 [2024-11-26 18:19:56.432300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.640 [2024-11-26 18:19:56.503830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.640 [2024-11-26 18:19:56.557236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.640 [2024-11-26 18:19:56.557296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.640 [2024-11-26 18:19:56.557332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.640 [2024-11-26 18:19:56.557343] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.640 [2024-11-26 18:19:56.557352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.640 [2024-11-26 18:19:56.558021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.898 [2024-11-26 18:19:56.809624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.898 [2024-11-26 18:19:56.841650] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.898 [2024-11-26 18:19:56.841906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.463 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.463 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:09.463 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.463 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.463 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.722 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.722 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=634258 00:24:09.722 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 634258 /var/tmp/bdevperf.sock 00:24:09.722 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 634258 ']' 00:24:09.722 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.722 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:09.722 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.722 18:19:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:09.722 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:09.722 "subsystems": [ 00:24:09.722 { 00:24:09.722 "subsystem": "keyring", 00:24:09.722 "config": [ 00:24:09.722 { 00:24:09.722 "method": "keyring_file_add_key", 00:24:09.722 "params": { 00:24:09.722 "name": "key0", 00:24:09.722 "path": "/tmp/tmp.JSIqjWZCxi" 00:24:09.722 } 00:24:09.722 } 00:24:09.722 ] 00:24:09.722 }, 00:24:09.722 { 00:24:09.722 "subsystem": "iobuf", 00:24:09.722 "config": [ 00:24:09.722 { 00:24:09.722 "method": "iobuf_set_options", 00:24:09.722 "params": { 00:24:09.722 "small_pool_count": 8192, 00:24:09.722 "large_pool_count": 1024, 00:24:09.722 "small_bufsize": 8192, 00:24:09.722 "large_bufsize": 135168, 00:24:09.722 "enable_numa": false 00:24:09.722 } 00:24:09.722 } 00:24:09.722 ] 00:24:09.722 }, 00:24:09.722 { 00:24:09.722 "subsystem": "sock", 00:24:09.722 "config": [ 00:24:09.722 { 00:24:09.722 "method": "sock_set_default_impl", 00:24:09.722 "params": { 00:24:09.722 "impl_name": "posix" 00:24:09.722 } 00:24:09.722 }, 00:24:09.722 { 00:24:09.722 "method": "sock_impl_set_options", 00:24:09.722 "params": { 00:24:09.722 "impl_name": "ssl", 00:24:09.722 "recv_buf_size": 4096, 00:24:09.722 "send_buf_size": 4096, 00:24:09.722 "enable_recv_pipe": true, 00:24:09.722 "enable_quickack": false, 00:24:09.722 "enable_placement_id": 0, 00:24:09.722 "enable_zerocopy_send_server": true, 00:24:09.722 "enable_zerocopy_send_client": false, 00:24:09.722 "zerocopy_threshold": 0, 00:24:09.722 "tls_version": 0, 00:24:09.722 "enable_ktls": false 00:24:09.722 } 00:24:09.722 }, 00:24:09.722 { 00:24:09.722 "method": "sock_impl_set_options", 00:24:09.722 "params": { 00:24:09.722 "impl_name": "posix", 00:24:09.722 "recv_buf_size": 2097152, 00:24:09.722 "send_buf_size": 2097152, 00:24:09.722 "enable_recv_pipe": true, 00:24:09.722 "enable_quickack": false, 00:24:09.722 "enable_placement_id": 0, 00:24:09.722 "enable_zerocopy_send_server": true, 00:24:09.722 "enable_zerocopy_send_client": false, 00:24:09.722 "zerocopy_threshold": 0, 00:24:09.722 "tls_version": 0, 00:24:09.722 "enable_ktls": false 00:24:09.722 } 00:24:09.722 } 00:24:09.722 ] 00:24:09.722 }, 00:24:09.722 { 00:24:09.722 "subsystem": "vmd", 00:24:09.722 "config": [] 00:24:09.722 }, 00:24:09.722 { 00:24:09.722 "subsystem": "accel", 00:24:09.722 "config": [ 00:24:09.722 { 00:24:09.722 "method": "accel_set_options", 00:24:09.722 "params": { 00:24:09.722 "small_cache_size": 128, 00:24:09.722 "large_cache_size": 16, 00:24:09.722 "task_count": 2048, 00:24:09.722 "sequence_count": 2048, 00:24:09.722 "buf_count": 2048 00:24:09.722 } 00:24:09.722 } 00:24:09.722 ] 00:24:09.722 }, 00:24:09.722 { 00:24:09.722 "subsystem": "bdev", 00:24:09.722 "config": [ 00:24:09.722 { 00:24:09.722 "method": "bdev_set_options", 00:24:09.722 "params": { 00:24:09.722 "bdev_io_pool_size": 65535, 00:24:09.722 "bdev_io_cache_size": 256, 00:24:09.722 "bdev_auto_examine": true, 00:24:09.722 "iobuf_small_cache_size": 128, 00:24:09.722 "iobuf_large_cache_size": 16 00:24:09.722 } 00:24:09.722 }, 00:24:09.722 { 00:24:09.722 "method": "bdev_raid_set_options", 00:24:09.722 "params": { 00:24:09.722 "process_window_size_kb": 1024, 00:24:09.722 "process_max_bandwidth_mb_sec": 0 00:24:09.722 } 00:24:09.722 }, 
00:24:09.722 { 00:24:09.723 "method": "bdev_iscsi_set_options", 00:24:09.723 "params": { 00:24:09.723 "timeout_sec": 30 00:24:09.723 } 00:24:09.723 }, 00:24:09.723 { 00:24:09.723 "method": "bdev_nvme_set_options", 00:24:09.723 "params": { 00:24:09.723 "action_on_timeout": "none", 00:24:09.723 "timeout_us": 0, 00:24:09.723 "timeout_admin_us": 0, 00:24:09.723 "keep_alive_timeout_ms": 10000, 00:24:09.723 "arbitration_burst": 0, 00:24:09.723 "low_priority_weight": 0, 00:24:09.723 "medium_priority_weight": 0, 00:24:09.723 "high_priority_weight": 0, 00:24:09.723 "nvme_adminq_poll_period_us": 10000, 00:24:09.723 "nvme_ioq_poll_period_us": 0, 00:24:09.723 "io_queue_requests": 512, 00:24:09.723 "delay_cmd_submit": true, 00:24:09.723 "transport_retry_count": 4, 00:24:09.723 "bdev_retry_count": 3, 00:24:09.723 "transport_ack_timeout": 0, 00:24:09.723 "ctrlr_loss_timeout_sec": 0, 00:24:09.723 "reconnect_delay_sec": 0, 00:24:09.723 "fast_io_fail_timeout_sec": 0, 00:24:09.723 "disable_auto_failback": false, 00:24:09.723 "generate_uuids": false, 00:24:09.723 "transport_tos": 0, 00:24:09.723 "nvme_error_stat": false, 00:24:09.723 "rdma_srq_size": 0, 00:24:09.723 "io_path_stat": false, 00:24:09.723 "allow_accel_sequence": false, 00:24:09.723 "rdma_max_cq_size": 0, 00:24:09.723 "rdma_cm_event_timeout_ms": 0, 00:24:09.723 "dhchap_digests": [ 00:24:09.723 "sha256", 00:24:09.723 "sha384", 00:24:09.723 "sha512" 00:24:09.723 ], 00:24:09.723 "dhchap_dhgroups": [ 00:24:09.723 "null", 00:24:09.723 "ffdhe2048", 00:24:09.723 "ffdhe3072", 00:24:09.723 "ffdhe4096", 00:24:09.723 "ffdhe6144", 00:24:09.723 "ffdhe8192" 00:24:09.723 ] 00:24:09.723 } 00:24:09.723 }, 00:24:09.723 { 00:24:09.723 "method": "bdev_nvme_attach_controller", 00:24:09.723 "params": { 00:24:09.723 "name": "TLSTEST", 00:24:09.723 "trtype": "TCP", 00:24:09.723 "adrfam": "IPv4", 00:24:09.723 "traddr": "10.0.0.2", 00:24:09.723 "trsvcid": "4420", 00:24:09.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.723 "prchk_reftag": false, 00:24:09.723 "prchk_guard": false, 00:24:09.723 "ctrlr_loss_timeout_sec": 0, 00:24:09.723 "reconnect_delay_sec": 0, 00:24:09.723 "fast_io_fail_timeout_sec": 0, 00:24:09.723 "psk": "key0", 00:24:09.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:09.723 "hdgst": false, 00:24:09.723 "ddgst": false, 00:24:09.723 "multipath": "multipath" 00:24:09.723 } 00:24:09.723 }, 00:24:09.723 { 00:24:09.723 "method": "bdev_nvme_set_hotplug", 00:24:09.723 "params": { 00:24:09.723 "period_us": 100000, 00:24:09.723 "enable": false 00:24:09.723 } 00:24:09.723 }, 00:24:09.723 { 00:24:09.723 "method": "bdev_wait_for_examine" 00:24:09.723 } 00:24:09.723 ] 00:24:09.723 }, 00:24:09.723 { 00:24:09.723 "subsystem": "nbd", 00:24:09.723 "config": [] 00:24:09.723 } 00:24:09.723 ] 00:24:09.723 }' 00:24:09.723 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.723 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.723 [2024-11-26 18:19:57.540267] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
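The JSON blob echoed above is the client-side configuration that tls.sh builds on the fly for bdevperf; the "-c /dev/fd/63" argument in the traced bdevperf command line is that blob delivered through process substitution. A minimal sketch of the pattern, with the config cut down to just the keyring section and the repository path shortened (an illustration, not part of the captured log):

  # Inline JSON config handed to bdevperf; bash exposes the substituted
  # stream as /dev/fd/63, which is the path visible in the xtrace above.
  bperfcfg='{ "subsystems": [ { "subsystem": "keyring", "config": [
    { "method": "keyring_file_add_key",
      "params": { "name": "key0", "path": "/tmp/tmp.JSIqjWZCxi" } } ] } ] }'
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bperfcfg")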
00:24:09.723 [2024-11-26 18:19:57.540360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634258 ] 00:24:09.723 [2024-11-26 18:19:57.606114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.723 [2024-11-26 18:19:57.663629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.981 [2024-11-26 18:19:57.845798] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:09.981 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.981 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:09.981 18:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:10.239 Running I/O for 10 seconds... 00:24:12.102 3464.00 IOPS, 13.53 MiB/s [2024-11-26T17:20:01.484Z] 3488.50 IOPS, 13.63 MiB/s [2024-11-26T17:20:02.423Z] 3521.67 IOPS, 13.76 MiB/s [2024-11-26T17:20:03.407Z] 3480.75 IOPS, 13.60 MiB/s [2024-11-26T17:20:04.339Z] 3495.20 IOPS, 13.65 MiB/s [2024-11-26T17:20:05.271Z] 3504.33 IOPS, 13.69 MiB/s [2024-11-26T17:20:06.204Z] 3509.86 IOPS, 13.71 MiB/s [2024-11-26T17:20:07.135Z] 3507.50 IOPS, 13.70 MiB/s [2024-11-26T17:20:08.507Z] 3511.11 IOPS, 13.72 MiB/s [2024-11-26T17:20:08.507Z] 3508.20 IOPS, 13.70 MiB/s 00:24:20.496 Latency(us) 00:24:20.496 [2024-11-26T17:20:08.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.496 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.496 Verification LBA range: start 0x0 length 0x2000 00:24:20.496 TLSTESTn1 : 10.02 3513.14 13.72 0.00 0.00 36371.42 9320.68 31845.64 00:24:20.496 [2024-11-26T17:20:08.507Z] =================================================================================================================== 00:24:20.496 [2024-11-26T17:20:08.507Z] Total : 3513.14 13.72 0.00 0.00 36371.42 9320.68 31845.64 00:24:20.496 { 00:24:20.496 "results": [ 00:24:20.496 { 00:24:20.496 "job": "TLSTESTn1", 00:24:20.496 "core_mask": "0x4", 00:24:20.496 "workload": "verify", 00:24:20.496 "status": "finished", 00:24:20.496 "verify_range": { 00:24:20.496 "start": 0, 00:24:20.496 "length": 8192 00:24:20.496 }, 00:24:20.496 "queue_depth": 128, 00:24:20.496 "io_size": 4096, 00:24:20.496 "runtime": 10.02208, 00:24:20.496 "iops": 3513.1429802994985, 00:24:20.497 "mibps": 13.723214766794916, 00:24:20.497 "io_failed": 0, 00:24:20.497 "io_timeout": 0, 00:24:20.497 "avg_latency_us": 36371.41937959886, 00:24:20.497 "min_latency_us": 9320.675555555556, 00:24:20.497 "max_latency_us": 31845.64148148148 00:24:20.497 } 00:24:20.497 ], 00:24:20.497 "core_count": 1 00:24:20.497 } 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 634258 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 634258 ']' 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 634258 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634258 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634258' 00:24:20.497 killing process with pid 634258 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 634258 00:24:20.497 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.497 00:24:20.497 Latency(us) 00:24:20.497 [2024-11-26T17:20:08.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.497 [2024-11-26T17:20:08.508Z] =================================================================================================================== 00:24:20.497 [2024-11-26T17:20:08.508Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 634258 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 634106 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 634106 ']' 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 634106 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 634106 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 634106' 00:24:20.497 killing process with pid 634106 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 634106 00:24:20.497 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 634106 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=635469 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 635469 00:24:20.755 18:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 635469 ']' 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.755 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.755 [2024-11-26 18:20:08.694197] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:20.755 [2024-11-26 18:20:08.694278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.013 [2024-11-26 18:20:08.765929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.013 [2024-11-26 18:20:08.824136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.013 [2024-11-26 18:20:08.824190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.013 [2024-11-26 18:20:08.824219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.013 [2024-11-26 18:20:08.824230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.013 [2024-11-26 18:20:08.824239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
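Between the two halves of the test the previous daemons are killed and a fresh nvmf_tgt is started inside the cvl_0_0_ns_spdk network namespace, and no RPC is issued until its UNIX socket answers. A rough sketch of that start-up step as traced above (backgrounding and PID capture are assumptions about what the nvmfappstart wrapper does internally):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  #  -i 0       shared-memory/instance id of the target
  #  -e 0xFFFF  tracepoint group mask, matching the "Tracepoint Group Mask 0xFFFF specified" notice
  waitforlisten "$nvmfpid"   # autotest helper: poll until /var/tmp/spdk.sock accepts RPCs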
00:24:21.013 [2024-11-26 18:20:08.824872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.013 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.013 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:21.013 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.013 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.013 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.013 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.013 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.JSIqjWZCxi 00:24:21.013 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.JSIqjWZCxi 00:24:21.013 18:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:21.271 [2024-11-26 18:20:09.254118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.271 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:21.836 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:21.836 [2024-11-26 18:20:09.823682] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:21.836 [2024-11-26 18:20:09.823907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.836 18:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:22.402 malloc0 00:24:22.402 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:22.659 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi 00:24:22.916 18:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=635759 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 635759 /var/tmp/bdevperf.sock 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 635759 ']' 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:23.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:23.174 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.174 [2024-11-26 18:20:11.085966] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:23.174 [2024-11-26 18:20:11.086060] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635759 ] 00:24:23.174 [2024-11-26 18:20:11.155875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.432 [2024-11-26 18:20:11.214657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.432 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.432 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.432 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi 00:24:23.690 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:23.948 [2024-11-26 18:20:11.846939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:23.948 nvme0n1 00:24:23.948 18:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:24.205 Running I/O for 1 seconds... 
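Stripped of the xtrace prefixes, the target-side setup traced above (tls.sh@52 through @59) is a short RPC sequence: create the TCP transport, create a subsystem with a TLS-enabled listener, back it with a malloc bdev, register the PSK file as key0, and allow host1 to connect with that key. Condensed sketch with the repository path shortened; the PSK path is the temporary file generated earlier in this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0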
00:24:25.138 3495.00 IOPS, 13.65 MiB/s 00:24:25.138 Latency(us) 00:24:25.138 [2024-11-26T17:20:13.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.138 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:25.138 Verification LBA range: start 0x0 length 0x2000 00:24:25.138 nvme0n1 : 1.02 3540.77 13.83 0.00 0.00 35786.44 7621.59 30292.20 00:24:25.138 [2024-11-26T17:20:13.149Z] =================================================================================================================== 00:24:25.138 [2024-11-26T17:20:13.149Z] Total : 3540.77 13.83 0.00 0.00 35786.44 7621.59 30292.20 00:24:25.138 { 00:24:25.138 "results": [ 00:24:25.138 { 00:24:25.138 "job": "nvme0n1", 00:24:25.138 "core_mask": "0x2", 00:24:25.138 "workload": "verify", 00:24:25.138 "status": "finished", 00:24:25.138 "verify_range": { 00:24:25.138 "start": 0, 00:24:25.138 "length": 8192 00:24:25.138 }, 00:24:25.138 "queue_depth": 128, 00:24:25.138 "io_size": 4096, 00:24:25.138 "runtime": 1.023225, 00:24:25.138 "iops": 3540.7657162403184, 00:24:25.138 "mibps": 13.831116079063744, 00:24:25.138 "io_failed": 0, 00:24:25.138 "io_timeout": 0, 00:24:25.138 "avg_latency_us": 35786.44394475624, 00:24:25.138 "min_latency_us": 7621.594074074074, 00:24:25.138 "max_latency_us": 30292.195555555554 00:24:25.138 } 00:24:25.138 ], 00:24:25.138 "core_count": 1 00:24:25.139 } 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 635759 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 635759 ']' 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 635759 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 635759 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 635759' 00:24:25.139 killing process with pid 635759 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 635759 00:24:25.139 Received shutdown signal, test time was about 1.000000 seconds 00:24:25.139 00:24:25.139 Latency(us) 00:24:25.139 [2024-11-26T17:20:13.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.139 [2024-11-26T17:20:13.150Z] =================================================================================================================== 00:24:25.139 [2024-11-26T17:20:13.150Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.139 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 635759 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 635469 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 635469 ']' 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 635469 00:24:25.397 18:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 635469 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 635469' 00:24:25.397 killing process with pid 635469 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 635469 00:24:25.397 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 635469 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=636154 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 636154 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 636154 ']' 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.655 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.655 [2024-11-26 18:20:13.650171] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:25.655 [2024-11-26 18:20:13.650265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.912 [2024-11-26 18:20:13.722188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.912 [2024-11-26 18:20:13.779869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.912 [2024-11-26 18:20:13.779936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
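The repeated kill sequences above all come from the same autotest helper: verify the PID is set and still alive, make sure the visible command is not a bare sudo wrapper, print the "killing process with pid" marker, then kill and reap it. Simplified sketch reconstructed from the xtrace; the real function in autotest_common.sh also handles the sudo case by killing the child process instead:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                  # the '[' -z ... ']' check in the trace
      kill -0 "$pid" 2>/dev/null || return 0     # nothing to do if it already exited
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1     # simplified; see note above
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true
  }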
00:24:25.912 [2024-11-26 18:20:13.779950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.912 [2024-11-26 18:20:13.779961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.912 [2024-11-26 18:20:13.779970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.912 [2024-11-26 18:20:13.780607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.912 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.913 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:25.913 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.913 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:25.913 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.170 [2024-11-26 18:20:13.928052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.170 malloc0 00:24:26.170 [2024-11-26 18:20:13.960037] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:26.170 [2024-11-26 18:20:13.960315] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=636181 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 636181 /var/tmp/bdevperf.sock 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 636181 ']' 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.170 18:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.170 [2024-11-26 18:20:14.031076] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
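The benchmark phases all follow the same shape: bdevperf is started idle and only begins I/O when the perform_tests RPC arrives on its private UNIX socket, which is why the run output appears only after the bdevperf.py call. Sketch of that flow with the flag meanings spelled out (paths shortened; flags exactly as traced for this phase):

  # Start bdevperf idle (-z) on its own RPC socket (-r), then trigger the run over RPC.
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1
  #  -m 2  core mask          -q 128  queue depth      -o 4k  I/O size
  #  -w verify  workload      -t 1    run time (s)     -z     wait for perform_tests
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests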
00:24:26.170 [2024-11-26 18:20:14.031138] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid636181 ] 00:24:26.170 [2024-11-26 18:20:14.096028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.170 [2024-11-26 18:20:14.153794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.428 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.428 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:26.428 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.JSIqjWZCxi 00:24:26.686 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:26.943 [2024-11-26 18:20:14.881064] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.201 nvme0n1 00:24:27.201 18:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:27.201 Running I/O for 1 seconds... 00:24:28.133 3426.00 IOPS, 13.38 MiB/s 00:24:28.133 Latency(us) 00:24:28.133 [2024-11-26T17:20:16.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.133 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:28.133 Verification LBA range: start 0x0 length 0x2000 00:24:28.133 nvme0n1 : 1.02 3487.37 13.62 0.00 0.00 36356.48 7184.69 45632.47 00:24:28.133 [2024-11-26T17:20:16.144Z] =================================================================================================================== 00:24:28.133 [2024-11-26T17:20:16.144Z] Total : 3487.37 13.62 0.00 0.00 36356.48 7184.69 45632.47 00:24:28.133 { 00:24:28.133 "results": [ 00:24:28.133 { 00:24:28.133 "job": "nvme0n1", 00:24:28.133 "core_mask": "0x2", 00:24:28.133 "workload": "verify", 00:24:28.133 "status": "finished", 00:24:28.133 "verify_range": { 00:24:28.133 "start": 0, 00:24:28.133 "length": 8192 00:24:28.133 }, 00:24:28.133 "queue_depth": 128, 00:24:28.133 "io_size": 4096, 00:24:28.133 "runtime": 1.019393, 00:24:28.133 "iops": 3487.3694443654213, 00:24:28.133 "mibps": 13.622536892052427, 00:24:28.133 "io_failed": 0, 00:24:28.133 "io_timeout": 0, 00:24:28.133 "avg_latency_us": 36356.47920654269, 00:24:28.133 "min_latency_us": 7184.687407407408, 00:24:28.133 "max_latency_us": 45632.474074074074 00:24:28.133 } 00:24:28.133 ], 00:24:28.133 "core_count": 1 00:24:28.133 } 00:24:28.133 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:28.133 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.133 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.391 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.391 18:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:28.391 "subsystems": [ 00:24:28.391 { 00:24:28.391 "subsystem": "keyring", 00:24:28.391 "config": [ 00:24:28.391 { 00:24:28.391 "method": "keyring_file_add_key", 00:24:28.391 "params": { 00:24:28.391 "name": "key0", 00:24:28.391 "path": "/tmp/tmp.JSIqjWZCxi" 00:24:28.391 } 00:24:28.391 } 00:24:28.391 ] 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "subsystem": "iobuf", 00:24:28.391 "config": [ 00:24:28.391 { 00:24:28.391 "method": "iobuf_set_options", 00:24:28.391 "params": { 00:24:28.391 "small_pool_count": 8192, 00:24:28.391 "large_pool_count": 1024, 00:24:28.391 "small_bufsize": 8192, 00:24:28.391 "large_bufsize": 135168, 00:24:28.391 "enable_numa": false 00:24:28.391 } 00:24:28.391 } 00:24:28.391 ] 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "subsystem": "sock", 00:24:28.391 "config": [ 00:24:28.391 { 00:24:28.391 "method": "sock_set_default_impl", 00:24:28.391 "params": { 00:24:28.391 "impl_name": "posix" 00:24:28.391 } 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "method": "sock_impl_set_options", 00:24:28.391 "params": { 00:24:28.391 "impl_name": "ssl", 00:24:28.391 "recv_buf_size": 4096, 00:24:28.391 "send_buf_size": 4096, 00:24:28.391 "enable_recv_pipe": true, 00:24:28.391 "enable_quickack": false, 00:24:28.391 "enable_placement_id": 0, 00:24:28.391 "enable_zerocopy_send_server": true, 00:24:28.391 "enable_zerocopy_send_client": false, 00:24:28.391 "zerocopy_threshold": 0, 00:24:28.391 "tls_version": 0, 00:24:28.391 "enable_ktls": false 00:24:28.391 } 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "method": "sock_impl_set_options", 00:24:28.391 "params": { 00:24:28.391 "impl_name": "posix", 00:24:28.391 "recv_buf_size": 2097152, 00:24:28.391 "send_buf_size": 2097152, 00:24:28.391 "enable_recv_pipe": true, 00:24:28.391 "enable_quickack": false, 00:24:28.391 "enable_placement_id": 0, 00:24:28.391 "enable_zerocopy_send_server": true, 00:24:28.391 "enable_zerocopy_send_client": false, 00:24:28.391 "zerocopy_threshold": 0, 00:24:28.391 "tls_version": 0, 00:24:28.391 "enable_ktls": false 00:24:28.391 } 00:24:28.391 } 00:24:28.391 ] 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "subsystem": "vmd", 00:24:28.391 "config": [] 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "subsystem": "accel", 00:24:28.391 "config": [ 00:24:28.391 { 00:24:28.391 "method": "accel_set_options", 00:24:28.391 "params": { 00:24:28.391 "small_cache_size": 128, 00:24:28.391 "large_cache_size": 16, 00:24:28.391 "task_count": 2048, 00:24:28.391 "sequence_count": 2048, 00:24:28.391 "buf_count": 2048 00:24:28.391 } 00:24:28.391 } 00:24:28.391 ] 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "subsystem": "bdev", 00:24:28.391 "config": [ 00:24:28.391 { 00:24:28.391 "method": "bdev_set_options", 00:24:28.391 "params": { 00:24:28.391 "bdev_io_pool_size": 65535, 00:24:28.391 "bdev_io_cache_size": 256, 00:24:28.391 "bdev_auto_examine": true, 00:24:28.391 "iobuf_small_cache_size": 128, 00:24:28.391 "iobuf_large_cache_size": 16 00:24:28.391 } 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "method": "bdev_raid_set_options", 00:24:28.391 "params": { 00:24:28.391 "process_window_size_kb": 1024, 00:24:28.391 "process_max_bandwidth_mb_sec": 0 00:24:28.391 } 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "method": "bdev_iscsi_set_options", 00:24:28.391 "params": { 00:24:28.391 "timeout_sec": 30 00:24:28.391 } 00:24:28.391 }, 00:24:28.391 { 00:24:28.391 "method": "bdev_nvme_set_options", 00:24:28.391 "params": { 00:24:28.391 "action_on_timeout": "none", 00:24:28.391 
"timeout_us": 0, 00:24:28.391 "timeout_admin_us": 0, 00:24:28.391 "keep_alive_timeout_ms": 10000, 00:24:28.391 "arbitration_burst": 0, 00:24:28.391 "low_priority_weight": 0, 00:24:28.391 "medium_priority_weight": 0, 00:24:28.391 "high_priority_weight": 0, 00:24:28.391 "nvme_adminq_poll_period_us": 10000, 00:24:28.391 "nvme_ioq_poll_period_us": 0, 00:24:28.391 "io_queue_requests": 0, 00:24:28.391 "delay_cmd_submit": true, 00:24:28.391 "transport_retry_count": 4, 00:24:28.391 "bdev_retry_count": 3, 00:24:28.391 "transport_ack_timeout": 0, 00:24:28.391 "ctrlr_loss_timeout_sec": 0, 00:24:28.391 "reconnect_delay_sec": 0, 00:24:28.392 "fast_io_fail_timeout_sec": 0, 00:24:28.392 "disable_auto_failback": false, 00:24:28.392 "generate_uuids": false, 00:24:28.392 "transport_tos": 0, 00:24:28.392 "nvme_error_stat": false, 00:24:28.392 "rdma_srq_size": 0, 00:24:28.392 "io_path_stat": false, 00:24:28.392 "allow_accel_sequence": false, 00:24:28.392 "rdma_max_cq_size": 0, 00:24:28.392 "rdma_cm_event_timeout_ms": 0, 00:24:28.392 "dhchap_digests": [ 00:24:28.392 "sha256", 00:24:28.392 "sha384", 00:24:28.392 "sha512" 00:24:28.392 ], 00:24:28.392 "dhchap_dhgroups": [ 00:24:28.392 "null", 00:24:28.392 "ffdhe2048", 00:24:28.392 "ffdhe3072", 00:24:28.392 "ffdhe4096", 00:24:28.392 "ffdhe6144", 00:24:28.392 "ffdhe8192" 00:24:28.392 ] 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "bdev_nvme_set_hotplug", 00:24:28.392 "params": { 00:24:28.392 "period_us": 100000, 00:24:28.392 "enable": false 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "bdev_malloc_create", 00:24:28.392 "params": { 00:24:28.392 "name": "malloc0", 00:24:28.392 "num_blocks": 8192, 00:24:28.392 "block_size": 4096, 00:24:28.392 "physical_block_size": 4096, 00:24:28.392 "uuid": "2f4fe46b-5ebf-428a-8551-ede4e2e7dbdd", 00:24:28.392 "optimal_io_boundary": 0, 00:24:28.392 "md_size": 0, 00:24:28.392 "dif_type": 0, 00:24:28.392 "dif_is_head_of_md": false, 00:24:28.392 "dif_pi_format": 0 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "bdev_wait_for_examine" 00:24:28.392 } 00:24:28.392 ] 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "subsystem": "nbd", 00:24:28.392 "config": [] 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "subsystem": "scheduler", 00:24:28.392 "config": [ 00:24:28.392 { 00:24:28.392 "method": "framework_set_scheduler", 00:24:28.392 "params": { 00:24:28.392 "name": "static" 00:24:28.392 } 00:24:28.392 } 00:24:28.392 ] 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "subsystem": "nvmf", 00:24:28.392 "config": [ 00:24:28.392 { 00:24:28.392 "method": "nvmf_set_config", 00:24:28.392 "params": { 00:24:28.392 "discovery_filter": "match_any", 00:24:28.392 "admin_cmd_passthru": { 00:24:28.392 "identify_ctrlr": false 00:24:28.392 }, 00:24:28.392 "dhchap_digests": [ 00:24:28.392 "sha256", 00:24:28.392 "sha384", 00:24:28.392 "sha512" 00:24:28.392 ], 00:24:28.392 "dhchap_dhgroups": [ 00:24:28.392 "null", 00:24:28.392 "ffdhe2048", 00:24:28.392 "ffdhe3072", 00:24:28.392 "ffdhe4096", 00:24:28.392 "ffdhe6144", 00:24:28.392 "ffdhe8192" 00:24:28.392 ] 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "nvmf_set_max_subsystems", 00:24:28.392 "params": { 00:24:28.392 "max_subsystems": 1024 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "nvmf_set_crdt", 00:24:28.392 "params": { 00:24:28.392 "crdt1": 0, 00:24:28.392 "crdt2": 0, 00:24:28.392 "crdt3": 0 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "nvmf_create_transport", 00:24:28.392 "params": 
{ 00:24:28.392 "trtype": "TCP", 00:24:28.392 "max_queue_depth": 128, 00:24:28.392 "max_io_qpairs_per_ctrlr": 127, 00:24:28.392 "in_capsule_data_size": 4096, 00:24:28.392 "max_io_size": 131072, 00:24:28.392 "io_unit_size": 131072, 00:24:28.392 "max_aq_depth": 128, 00:24:28.392 "num_shared_buffers": 511, 00:24:28.392 "buf_cache_size": 4294967295, 00:24:28.392 "dif_insert_or_strip": false, 00:24:28.392 "zcopy": false, 00:24:28.392 "c2h_success": false, 00:24:28.392 "sock_priority": 0, 00:24:28.392 "abort_timeout_sec": 1, 00:24:28.392 "ack_timeout": 0, 00:24:28.392 "data_wr_pool_size": 0 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "nvmf_create_subsystem", 00:24:28.392 "params": { 00:24:28.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.392 "allow_any_host": false, 00:24:28.392 "serial_number": "00000000000000000000", 00:24:28.392 "model_number": "SPDK bdev Controller", 00:24:28.392 "max_namespaces": 32, 00:24:28.392 "min_cntlid": 1, 00:24:28.392 "max_cntlid": 65519, 00:24:28.392 "ana_reporting": false 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "nvmf_subsystem_add_host", 00:24:28.392 "params": { 00:24:28.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.392 "host": "nqn.2016-06.io.spdk:host1", 00:24:28.392 "psk": "key0" 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "nvmf_subsystem_add_ns", 00:24:28.392 "params": { 00:24:28.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.392 "namespace": { 00:24:28.392 "nsid": 1, 00:24:28.392 "bdev_name": "malloc0", 00:24:28.392 "nguid": "2F4FE46B5EBF428A8551EDE4E2E7DBDD", 00:24:28.392 "uuid": "2f4fe46b-5ebf-428a-8551-ede4e2e7dbdd", 00:24:28.392 "no_auto_visible": false 00:24:28.392 } 00:24:28.392 } 00:24:28.392 }, 00:24:28.392 { 00:24:28.392 "method": "nvmf_subsystem_add_listener", 00:24:28.392 "params": { 00:24:28.392 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.392 "listen_address": { 00:24:28.392 "trtype": "TCP", 00:24:28.392 "adrfam": "IPv4", 00:24:28.392 "traddr": "10.0.0.2", 00:24:28.392 "trsvcid": "4420" 00:24:28.392 }, 00:24:28.392 "secure_channel": false, 00:24:28.392 "sock_impl": "ssl" 00:24:28.392 } 00:24:28.392 } 00:24:28.392 ] 00:24:28.392 } 00:24:28.392 ] 00:24:28.392 }' 00:24:28.392 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:28.650 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:28.650 "subsystems": [ 00:24:28.650 { 00:24:28.650 "subsystem": "keyring", 00:24:28.650 "config": [ 00:24:28.650 { 00:24:28.650 "method": "keyring_file_add_key", 00:24:28.650 "params": { 00:24:28.650 "name": "key0", 00:24:28.650 "path": "/tmp/tmp.JSIqjWZCxi" 00:24:28.650 } 00:24:28.650 } 00:24:28.650 ] 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "subsystem": "iobuf", 00:24:28.650 "config": [ 00:24:28.650 { 00:24:28.650 "method": "iobuf_set_options", 00:24:28.650 "params": { 00:24:28.650 "small_pool_count": 8192, 00:24:28.650 "large_pool_count": 1024, 00:24:28.650 "small_bufsize": 8192, 00:24:28.650 "large_bufsize": 135168, 00:24:28.650 "enable_numa": false 00:24:28.650 } 00:24:28.650 } 00:24:28.650 ] 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "subsystem": "sock", 00:24:28.650 "config": [ 00:24:28.650 { 00:24:28.650 "method": "sock_set_default_impl", 00:24:28.650 "params": { 00:24:28.650 "impl_name": "posix" 00:24:28.650 } 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "method": "sock_impl_set_options", 00:24:28.650 
"params": { 00:24:28.650 "impl_name": "ssl", 00:24:28.650 "recv_buf_size": 4096, 00:24:28.650 "send_buf_size": 4096, 00:24:28.650 "enable_recv_pipe": true, 00:24:28.650 "enable_quickack": false, 00:24:28.650 "enable_placement_id": 0, 00:24:28.650 "enable_zerocopy_send_server": true, 00:24:28.650 "enable_zerocopy_send_client": false, 00:24:28.650 "zerocopy_threshold": 0, 00:24:28.650 "tls_version": 0, 00:24:28.650 "enable_ktls": false 00:24:28.650 } 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "method": "sock_impl_set_options", 00:24:28.650 "params": { 00:24:28.650 "impl_name": "posix", 00:24:28.650 "recv_buf_size": 2097152, 00:24:28.650 "send_buf_size": 2097152, 00:24:28.650 "enable_recv_pipe": true, 00:24:28.650 "enable_quickack": false, 00:24:28.650 "enable_placement_id": 0, 00:24:28.650 "enable_zerocopy_send_server": true, 00:24:28.650 "enable_zerocopy_send_client": false, 00:24:28.650 "zerocopy_threshold": 0, 00:24:28.650 "tls_version": 0, 00:24:28.650 "enable_ktls": false 00:24:28.650 } 00:24:28.650 } 00:24:28.650 ] 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "subsystem": "vmd", 00:24:28.650 "config": [] 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "subsystem": "accel", 00:24:28.650 "config": [ 00:24:28.650 { 00:24:28.650 "method": "accel_set_options", 00:24:28.650 "params": { 00:24:28.650 "small_cache_size": 128, 00:24:28.650 "large_cache_size": 16, 00:24:28.650 "task_count": 2048, 00:24:28.650 "sequence_count": 2048, 00:24:28.650 "buf_count": 2048 00:24:28.650 } 00:24:28.650 } 00:24:28.650 ] 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "subsystem": "bdev", 00:24:28.650 "config": [ 00:24:28.650 { 00:24:28.650 "method": "bdev_set_options", 00:24:28.650 "params": { 00:24:28.650 "bdev_io_pool_size": 65535, 00:24:28.650 "bdev_io_cache_size": 256, 00:24:28.650 "bdev_auto_examine": true, 00:24:28.650 "iobuf_small_cache_size": 128, 00:24:28.650 "iobuf_large_cache_size": 16 00:24:28.650 } 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "method": "bdev_raid_set_options", 00:24:28.650 "params": { 00:24:28.650 "process_window_size_kb": 1024, 00:24:28.650 "process_max_bandwidth_mb_sec": 0 00:24:28.650 } 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "method": "bdev_iscsi_set_options", 00:24:28.650 "params": { 00:24:28.650 "timeout_sec": 30 00:24:28.650 } 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "method": "bdev_nvme_set_options", 00:24:28.650 "params": { 00:24:28.650 "action_on_timeout": "none", 00:24:28.650 "timeout_us": 0, 00:24:28.650 "timeout_admin_us": 0, 00:24:28.650 "keep_alive_timeout_ms": 10000, 00:24:28.650 "arbitration_burst": 0, 00:24:28.650 "low_priority_weight": 0, 00:24:28.650 "medium_priority_weight": 0, 00:24:28.650 "high_priority_weight": 0, 00:24:28.650 "nvme_adminq_poll_period_us": 10000, 00:24:28.650 "nvme_ioq_poll_period_us": 0, 00:24:28.650 "io_queue_requests": 512, 00:24:28.650 "delay_cmd_submit": true, 00:24:28.650 "transport_retry_count": 4, 00:24:28.650 "bdev_retry_count": 3, 00:24:28.650 "transport_ack_timeout": 0, 00:24:28.650 "ctrlr_loss_timeout_sec": 0, 00:24:28.650 "reconnect_delay_sec": 0, 00:24:28.650 "fast_io_fail_timeout_sec": 0, 00:24:28.650 "disable_auto_failback": false, 00:24:28.650 "generate_uuids": false, 00:24:28.650 "transport_tos": 0, 00:24:28.650 "nvme_error_stat": false, 00:24:28.650 "rdma_srq_size": 0, 00:24:28.650 "io_path_stat": false, 00:24:28.650 "allow_accel_sequence": false, 00:24:28.650 "rdma_max_cq_size": 0, 00:24:28.650 "rdma_cm_event_timeout_ms": 0, 00:24:28.650 "dhchap_digests": [ 00:24:28.650 "sha256", 00:24:28.650 "sha384", 00:24:28.650 
"sha512" 00:24:28.650 ], 00:24:28.650 "dhchap_dhgroups": [ 00:24:28.650 "null", 00:24:28.650 "ffdhe2048", 00:24:28.650 "ffdhe3072", 00:24:28.650 "ffdhe4096", 00:24:28.650 "ffdhe6144", 00:24:28.650 "ffdhe8192" 00:24:28.650 ] 00:24:28.650 } 00:24:28.650 }, 00:24:28.650 { 00:24:28.650 "method": "bdev_nvme_attach_controller", 00:24:28.650 "params": { 00:24:28.651 "name": "nvme0", 00:24:28.651 "trtype": "TCP", 00:24:28.651 "adrfam": "IPv4", 00:24:28.651 "traddr": "10.0.0.2", 00:24:28.651 "trsvcid": "4420", 00:24:28.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.651 "prchk_reftag": false, 00:24:28.651 "prchk_guard": false, 00:24:28.651 "ctrlr_loss_timeout_sec": 0, 00:24:28.651 "reconnect_delay_sec": 0, 00:24:28.651 "fast_io_fail_timeout_sec": 0, 00:24:28.651 "psk": "key0", 00:24:28.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.651 "hdgst": false, 00:24:28.651 "ddgst": false, 00:24:28.651 "multipath": "multipath" 00:24:28.651 } 00:24:28.651 }, 00:24:28.651 { 00:24:28.651 "method": "bdev_nvme_set_hotplug", 00:24:28.651 "params": { 00:24:28.651 "period_us": 100000, 00:24:28.651 "enable": false 00:24:28.651 } 00:24:28.651 }, 00:24:28.651 { 00:24:28.651 "method": "bdev_enable_histogram", 00:24:28.651 "params": { 00:24:28.651 "name": "nvme0n1", 00:24:28.651 "enable": true 00:24:28.651 } 00:24:28.651 }, 00:24:28.651 { 00:24:28.651 "method": "bdev_wait_for_examine" 00:24:28.651 } 00:24:28.651 ] 00:24:28.651 }, 00:24:28.651 { 00:24:28.651 "subsystem": "nbd", 00:24:28.651 "config": [] 00:24:28.651 } 00:24:28.651 ] 00:24:28.651 }' 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 636181 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 636181 ']' 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 636181 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 636181 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 636181' 00:24:28.651 killing process with pid 636181 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 636181 00:24:28.651 Received shutdown signal, test time was about 1.000000 seconds 00:24:28.651 00:24:28.651 Latency(us) 00:24:28.651 [2024-11-26T17:20:16.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.651 [2024-11-26T17:20:16.662Z] =================================================================================================================== 00:24:28.651 [2024-11-26T17:20:16.662Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.651 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 636181 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 636154 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 636154 ']' 
00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 636154 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 636154 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 636154' 00:24:28.909 killing process with pid 636154 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 636154 00:24:28.909 18:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 636154 00:24:29.167 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:29.167 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:29.167 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:29.167 "subsystems": [ 00:24:29.167 { 00:24:29.167 "subsystem": "keyring", 00:24:29.167 "config": [ 00:24:29.167 { 00:24:29.167 "method": "keyring_file_add_key", 00:24:29.167 "params": { 00:24:29.167 "name": "key0", 00:24:29.167 "path": "/tmp/tmp.JSIqjWZCxi" 00:24:29.167 } 00:24:29.167 } 00:24:29.167 ] 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "subsystem": "iobuf", 00:24:29.167 "config": [ 00:24:29.167 { 00:24:29.167 "method": "iobuf_set_options", 00:24:29.167 "params": { 00:24:29.167 "small_pool_count": 8192, 00:24:29.167 "large_pool_count": 1024, 00:24:29.167 "small_bufsize": 8192, 00:24:29.167 "large_bufsize": 135168, 00:24:29.167 "enable_numa": false 00:24:29.167 } 00:24:29.167 } 00:24:29.167 ] 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "subsystem": "sock", 00:24:29.167 "config": [ 00:24:29.167 { 00:24:29.167 "method": "sock_set_default_impl", 00:24:29.167 "params": { 00:24:29.167 "impl_name": "posix" 00:24:29.167 } 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "method": "sock_impl_set_options", 00:24:29.167 "params": { 00:24:29.167 "impl_name": "ssl", 00:24:29.167 "recv_buf_size": 4096, 00:24:29.167 "send_buf_size": 4096, 00:24:29.167 "enable_recv_pipe": true, 00:24:29.167 "enable_quickack": false, 00:24:29.167 "enable_placement_id": 0, 00:24:29.167 "enable_zerocopy_send_server": true, 00:24:29.167 "enable_zerocopy_send_client": false, 00:24:29.167 "zerocopy_threshold": 0, 00:24:29.167 "tls_version": 0, 00:24:29.167 "enable_ktls": false 00:24:29.167 } 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "method": "sock_impl_set_options", 00:24:29.167 "params": { 00:24:29.167 "impl_name": "posix", 00:24:29.167 "recv_buf_size": 2097152, 00:24:29.167 "send_buf_size": 2097152, 00:24:29.167 "enable_recv_pipe": true, 00:24:29.167 "enable_quickack": false, 00:24:29.167 "enable_placement_id": 0, 00:24:29.167 "enable_zerocopy_send_server": true, 00:24:29.167 "enable_zerocopy_send_client": false, 00:24:29.167 "zerocopy_threshold": 0, 00:24:29.167 "tls_version": 0, 00:24:29.167 "enable_ktls": false 00:24:29.167 } 00:24:29.167 } 00:24:29.167 ] 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "subsystem": "vmd", 
00:24:29.167 "config": [] 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "subsystem": "accel", 00:24:29.167 "config": [ 00:24:29.167 { 00:24:29.167 "method": "accel_set_options", 00:24:29.167 "params": { 00:24:29.167 "small_cache_size": 128, 00:24:29.167 "large_cache_size": 16, 00:24:29.167 "task_count": 2048, 00:24:29.167 "sequence_count": 2048, 00:24:29.167 "buf_count": 2048 00:24:29.167 } 00:24:29.167 } 00:24:29.167 ] 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "subsystem": "bdev", 00:24:29.167 "config": [ 00:24:29.167 { 00:24:29.167 "method": "bdev_set_options", 00:24:29.167 "params": { 00:24:29.167 "bdev_io_pool_size": 65535, 00:24:29.167 "bdev_io_cache_size": 256, 00:24:29.167 "bdev_auto_examine": true, 00:24:29.167 "iobuf_small_cache_size": 128, 00:24:29.167 "iobuf_large_cache_size": 16 00:24:29.167 } 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "method": "bdev_raid_set_options", 00:24:29.167 "params": { 00:24:29.167 "process_window_size_kb": 1024, 00:24:29.167 "process_max_bandwidth_mb_sec": 0 00:24:29.167 } 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "method": "bdev_iscsi_set_options", 00:24:29.167 "params": { 00:24:29.167 "timeout_sec": 30 00:24:29.167 } 00:24:29.167 }, 00:24:29.167 { 00:24:29.167 "method": "bdev_nvme_set_options", 00:24:29.167 "params": { 00:24:29.167 "action_on_timeout": "none", 00:24:29.167 "timeout_us": 0, 00:24:29.167 "timeout_admin_us": 0, 00:24:29.167 "keep_alive_timeout_ms": 10000, 00:24:29.167 "arbitration_burst": 0, 00:24:29.167 "low_priority_weight": 0, 00:24:29.167 "medium_priority_weight": 0, 00:24:29.167 "high_priority_weight": 0, 00:24:29.167 "nvme_adminq_poll_period_us": 10000, 00:24:29.167 "nvme_ioq_poll_period_us": 0, 00:24:29.167 "io_queue_requests": 0, 00:24:29.167 "delay_cmd_submit": true, 00:24:29.167 "transport_retry_count": 4, 00:24:29.167 "bdev_retry_count": 3, 00:24:29.167 "transport_ack_timeout": 0, 00:24:29.167 "ctrlr_loss_timeout_sec": 0, 00:24:29.167 "reconnect_delay_sec": 0, 00:24:29.167 "fast_io_fail_timeout_sec": 0, 00:24:29.167 "disable_auto_failback": false, 00:24:29.167 "generate_uuids": false, 00:24:29.167 "transport_tos": 0, 00:24:29.167 "nvme_error_stat": false, 00:24:29.167 "rdma_srq_size": 0, 00:24:29.167 "io_path_stat": false, 00:24:29.167 "allow_accel_sequence": false, 00:24:29.168 "rdma_max_cq_size": 0, 00:24:29.168 "rdma_cm_event_timeout_ms": 0, 00:24:29.168 "dhchap_digests": [ 00:24:29.168 "sha256", 00:24:29.168 "sha384", 00:24:29.168 "sha512" 00:24:29.168 ], 00:24:29.168 "dhchap_dhgroups": [ 00:24:29.168 "null", 00:24:29.168 "ffdhe2048", 00:24:29.168 "ffdhe3072", 00:24:29.168 "ffdhe4096", 00:24:29.168 "ffdhe6144", 00:24:29.168 "ffdhe8192" 00:24:29.168 ] 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "bdev_nvme_set_hotplug", 00:24:29.168 "params": { 00:24:29.168 "period_us": 100000, 00:24:29.168 "enable": false 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "bdev_malloc_create", 00:24:29.168 "params": { 00:24:29.168 "name": "malloc0", 00:24:29.168 "num_blocks": 8192, 00:24:29.168 "block_size": 4096, 00:24:29.168 "physical_block_size": 4096, 00:24:29.168 "uuid": "2f4fe46b-5ebf-428a-8551-ede4e2e7dbdd", 00:24:29.168 "optimal_io_boundary": 0, 00:24:29.168 "md_size": 0, 00:24:29.168 "dif_type": 0, 00:24:29.168 "dif_is_head_of_md": false, 00:24:29.168 "dif_pi_format": 0 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "bdev_wait_for_examine" 00:24:29.168 } 00:24:29.168 ] 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "subsystem": "nbd", 00:24:29.168 "config": [] 
00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "subsystem": "scheduler", 00:24:29.168 "config": [ 00:24:29.168 { 00:24:29.168 "method": "framework_set_scheduler", 00:24:29.168 "params": { 00:24:29.168 "name": "static" 00:24:29.168 } 00:24:29.168 } 00:24:29.168 ] 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "subsystem": "nvmf", 00:24:29.168 "config": [ 00:24:29.168 { 00:24:29.168 "method": "nvmf_set_config", 00:24:29.168 "params": { 00:24:29.168 "discovery_filter": "match_any", 00:24:29.168 "admin_cmd_passthru": { 00:24:29.168 "identify_ctrlr": false 00:24:29.168 }, 00:24:29.168 "dhchap_digests": [ 00:24:29.168 "sha256", 00:24:29.168 "sha384", 00:24:29.168 "sha512" 00:24:29.168 ], 00:24:29.168 "dhchap_dhgroups": [ 00:24:29.168 "null", 00:24:29.168 "ffdhe2048", 00:24:29.168 "ffdhe3072", 00:24:29.168 "ffdhe4096", 00:24:29.168 "ffdhe6144", 00:24:29.168 "ffdhe8192" 00:24:29.168 ] 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "nvmf_set_max_subsystems", 00:24:29.168 "params": { 00:24:29.168 "max_subsystems": 1024 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "nvmf_set_crdt", 00:24:29.168 "params": { 00:24:29.168 "crdt1": 0, 00:24:29.168 "crdt2": 0, 00:24:29.168 "crdt3": 0 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "nvmf_create_transport", 00:24:29.168 "params": { 00:24:29.168 "trtype": "TCP", 00:24:29.168 "max_queue_depth": 128, 00:24:29.168 "max_io_qpairs_per_ctrlr": 127, 00:24:29.168 "in_capsule_data_size": 4096, 00:24:29.168 "max_io_size": 131072, 00:24:29.168 "io_unit_size": 131072, 00:24:29.168 "max_aq_depth": 128, 00:24:29.168 "num_shared_buffers": 511, 00:24:29.168 "buf_cache_size": 4294967295, 00:24:29.168 "dif_insert_or_strip": false, 00:24:29.168 "zcopy": false, 00:24:29.168 "c2h_success": false, 00:24:29.168 "sock_priority": 0, 00:24:29.168 "abort_timeout_sec": 1, 00:24:29.168 "ack_timeout": 0, 00:24:29.168 "data_wr_pool_size": 0 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "nvmf_create_subsystem", 00:24:29.168 "params": { 00:24:29.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.168 "allow_any_host": false, 00:24:29.168 "serial_number": "00000000000000000000", 00:24:29.168 "model_number": "SPDK bdev Controller", 00:24:29.168 "max_namespaces": 32, 00:24:29.168 "min_cntlid": 1, 00:24:29.168 "max_cntlid": 65519, 00:24:29.168 "ana_reporting": false 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "nvmf_subsystem_add_host", 00:24:29.168 "params": { 00:24:29.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.168 "host": "nqn.2016-06.io.spdk:host1", 00:24:29.168 "psk": "key0" 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "nvmf_subsystem_add_ns", 00:24:29.168 "params": { 00:24:29.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.168 "namespace": { 00:24:29.168 "nsid": 1, 00:24:29.168 "bdev_name": "malloc0", 00:24:29.168 "nguid": "2F4FE46B5EBF428A8551EDE4E2E7DBDD", 00:24:29.168 "uuid": "2f4fe46b-5ebf-428a-8551-ede4e2e7dbdd", 00:24:29.168 "no_auto_visible": false 00:24:29.168 } 00:24:29.168 } 00:24:29.168 }, 00:24:29.168 { 00:24:29.168 "method": "nvmf_subsystem_add_listener", 00:24:29.168 "params": { 00:24:29.168 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.168 "listen_address": { 00:24:29.168 "trtype": "TCP", 00:24:29.168 "adrfam": "IPv4", 00:24:29.168 "traddr": "10.0.0.2", 00:24:29.168 "trsvcid": "4420" 00:24:29.168 }, 00:24:29.168 "secure_channel": false, 00:24:29.168 "sock_impl": "ssl" 00:24:29.168 } 00:24:29.168 } 00:24:29.168 ] 00:24:29.168 } 00:24:29.168 
] 00:24:29.168 }' 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=636586 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 636586 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 636586 ']' 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.168 18:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.168 [2024-11-26 18:20:17.128705] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:29.168 [2024-11-26 18:20:17.128824] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.427 [2024-11-26 18:20:17.201506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.427 [2024-11-26 18:20:17.256808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.427 [2024-11-26 18:20:17.256880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.427 [2024-11-26 18:20:17.256893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.427 [2024-11-26 18:20:17.256911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.427 [2024-11-26 18:20:17.256921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:29.427 [2024-11-26 18:20:17.257523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.685 [2024-11-26 18:20:17.504487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.685 [2024-11-26 18:20:17.536499] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:29.685 [2024-11-26 18:20:17.536733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=636737 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 636737 /var/tmp/bdevperf.sock 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 636737 ']' 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
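For readability, the TLS-relevant part of the target configuration echoed above reduces to three entries: the PSK file registered as "key0" in the keyring subsystem, the host entry on cnode1 that references that key, and a listener created with the "ssl" socket implementation. A condensed sketch follows, assuming the same values as the echoed config; the file name tls_target.json is illustrative and this excerpt is not a complete standalone config (transport, subsystem and namespace creation are omitted).

# Condensed excerpt of the target config echoed above (not a complete config):
# only the pieces that enable TLS on the NVMe/TCP listener are kept.
# File name is illustrative; key path, NQNs and address come from the log.
cat > tls_target.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.JSIqjWZCxi" } } ] },
    { "subsystem": "nvmf", "config": [
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.2", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } } ] }
  ]
}
EOF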
00:24:30.252 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:30.252 "subsystems": [ 00:24:30.252 { 00:24:30.252 "subsystem": "keyring", 00:24:30.252 "config": [ 00:24:30.252 { 00:24:30.252 "method": "keyring_file_add_key", 00:24:30.252 "params": { 00:24:30.252 "name": "key0", 00:24:30.252 "path": "/tmp/tmp.JSIqjWZCxi" 00:24:30.252 } 00:24:30.252 } 00:24:30.252 ] 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "subsystem": "iobuf", 00:24:30.252 "config": [ 00:24:30.252 { 00:24:30.252 "method": "iobuf_set_options", 00:24:30.252 "params": { 00:24:30.252 "small_pool_count": 8192, 00:24:30.252 "large_pool_count": 1024, 00:24:30.252 "small_bufsize": 8192, 00:24:30.252 "large_bufsize": 135168, 00:24:30.252 "enable_numa": false 00:24:30.252 } 00:24:30.252 } 00:24:30.252 ] 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "subsystem": "sock", 00:24:30.252 "config": [ 00:24:30.252 { 00:24:30.252 "method": "sock_set_default_impl", 00:24:30.252 "params": { 00:24:30.252 "impl_name": "posix" 00:24:30.252 } 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "method": "sock_impl_set_options", 00:24:30.252 "params": { 00:24:30.252 "impl_name": "ssl", 00:24:30.252 "recv_buf_size": 4096, 00:24:30.252 "send_buf_size": 4096, 00:24:30.252 "enable_recv_pipe": true, 00:24:30.252 "enable_quickack": false, 00:24:30.252 "enable_placement_id": 0, 00:24:30.252 "enable_zerocopy_send_server": true, 00:24:30.252 "enable_zerocopy_send_client": false, 00:24:30.252 "zerocopy_threshold": 0, 00:24:30.252 "tls_version": 0, 00:24:30.252 "enable_ktls": false 00:24:30.252 } 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "method": "sock_impl_set_options", 00:24:30.252 "params": { 00:24:30.252 "impl_name": "posix", 00:24:30.252 "recv_buf_size": 2097152, 00:24:30.252 "send_buf_size": 2097152, 00:24:30.252 "enable_recv_pipe": true, 00:24:30.252 "enable_quickack": false, 00:24:30.252 "enable_placement_id": 0, 00:24:30.252 "enable_zerocopy_send_server": true, 00:24:30.252 "enable_zerocopy_send_client": false, 00:24:30.252 "zerocopy_threshold": 0, 00:24:30.252 "tls_version": 0, 00:24:30.252 "enable_ktls": false 00:24:30.252 } 00:24:30.252 } 00:24:30.252 ] 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "subsystem": "vmd", 00:24:30.252 "config": [] 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "subsystem": "accel", 00:24:30.252 "config": [ 00:24:30.252 { 00:24:30.252 "method": "accel_set_options", 00:24:30.252 "params": { 00:24:30.252 "small_cache_size": 128, 00:24:30.252 "large_cache_size": 16, 00:24:30.252 "task_count": 2048, 00:24:30.252 "sequence_count": 2048, 00:24:30.252 "buf_count": 2048 00:24:30.252 } 00:24:30.252 } 00:24:30.252 ] 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "subsystem": "bdev", 00:24:30.252 "config": [ 00:24:30.252 { 00:24:30.252 "method": "bdev_set_options", 00:24:30.252 "params": { 00:24:30.252 "bdev_io_pool_size": 65535, 00:24:30.252 "bdev_io_cache_size": 256, 00:24:30.252 "bdev_auto_examine": true, 00:24:30.252 "iobuf_small_cache_size": 128, 00:24:30.252 "iobuf_large_cache_size": 16 00:24:30.252 } 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "method": "bdev_raid_set_options", 00:24:30.252 "params": { 00:24:30.252 "process_window_size_kb": 1024, 00:24:30.252 "process_max_bandwidth_mb_sec": 0 00:24:30.252 } 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "method": "bdev_iscsi_set_options", 00:24:30.252 "params": { 00:24:30.252 "timeout_sec": 30 00:24:30.252 } 00:24:30.252 }, 00:24:30.252 { 00:24:30.252 "method": "bdev_nvme_set_options", 00:24:30.252 "params": { 00:24:30.252 "action_on_timeout": "none", 
00:24:30.252 "timeout_us": 0, 00:24:30.252 "timeout_admin_us": 0, 00:24:30.252 "keep_alive_timeout_ms": 10000, 00:24:30.252 "arbitration_burst": 0, 00:24:30.252 "low_priority_weight": 0, 00:24:30.252 "medium_priority_weight": 0, 00:24:30.252 "high_priority_weight": 0, 00:24:30.252 "nvme_adminq_poll_period_us": 10000, 00:24:30.252 "nvme_ioq_poll_period_us": 0, 00:24:30.252 "io_queue_requests": 512, 00:24:30.252 "delay_cmd_submit": true, 00:24:30.252 "transport_retry_count": 4, 00:24:30.252 "bdev_retry_count": 3, 00:24:30.252 "transport_ack_timeout": 0, 00:24:30.252 "ctrlr_loss_timeout_sec": 0, 00:24:30.252 "reconnect_delay_sec": 0, 00:24:30.252 "fast_io_fail_timeout_sec": 0, 00:24:30.252 "disable_auto_failback": false, 00:24:30.252 "generate_uuids": false, 00:24:30.252 "transport_tos": 0, 00:24:30.252 "nvme_error_stat": false, 00:24:30.252 "rdma_srq_size": 0, 00:24:30.252 "io_path_stat": false, 00:24:30.252 "allow_accel_sequence": false, 00:24:30.252 "rdma_max_cq_size": 0, 00:24:30.252 "rdma_cm_event_timeout_ms": 0, 00:24:30.252 "dhchap_digests": [ 00:24:30.252 "sha256", 00:24:30.252 "sha384", 00:24:30.252 "sha512" 00:24:30.252 ], 00:24:30.252 "dhchap_dhgroups": [ 00:24:30.252 "null", 00:24:30.252 "ffdhe2048", 00:24:30.252 "ffdhe3072", 00:24:30.252 "ffdhe4096", 00:24:30.252 "ffdhe6144", 00:24:30.252 "ffdhe8192" 00:24:30.252 ] 00:24:30.252 } 00:24:30.252 }, 00:24:30.253 { 00:24:30.253 "method": "bdev_nvme_attach_controller", 00:24:30.253 "params": { 00:24:30.253 "name": "nvme0", 00:24:30.253 "trtype": "TCP", 00:24:30.253 "adrfam": "IPv4", 00:24:30.253 "traddr": "10.0.0.2", 00:24:30.253 "trsvcid": "4420", 00:24:30.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.253 "prchk_reftag": false, 00:24:30.253 "prchk_guard": false, 00:24:30.253 "ctrlr_loss_timeout_sec": 0, 00:24:30.253 "reconnect_delay_sec": 0, 00:24:30.253 "fast_io_fail_timeout_sec": 0, 00:24:30.253 "psk": "key0", 00:24:30.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:30.253 "hdgst": false, 00:24:30.253 "ddgst": false, 00:24:30.253 "multipath": "multipath" 00:24:30.253 } 00:24:30.253 }, 00:24:30.253 { 00:24:30.253 "method": "bdev_nvme_set_hotplug", 00:24:30.253 "params": { 00:24:30.253 "period_us": 100000, 00:24:30.253 "enable": false 00:24:30.253 } 00:24:30.253 }, 00:24:30.253 { 00:24:30.253 "method": "bdev_enable_histogram", 00:24:30.253 "params": { 00:24:30.253 "name": "nvme0n1", 00:24:30.253 "enable": true 00:24:30.253 } 00:24:30.253 }, 00:24:30.253 { 00:24:30.253 "method": "bdev_wait_for_examine" 00:24:30.253 } 00:24:30.253 ] 00:24:30.253 }, 00:24:30.253 { 00:24:30.253 "subsystem": "nbd", 00:24:30.253 "config": [] 00:24:30.253 } 00:24:30.253 ] 00:24:30.253 }' 00:24:30.253 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.253 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.253 [2024-11-26 18:20:18.176462] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:24:30.253 [2024-11-26 18:20:18.176532] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid636737 ] 00:24:30.253 [2024-11-26 18:20:18.242111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.512 [2024-11-26 18:20:18.301714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.512 [2024-11-26 18:20:18.476329] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:30.770 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.770 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:30.770 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:30.770 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:31.028 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.028 18:20:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.028 Running I/O for 1 seconds... 00:24:32.402 3500.00 IOPS, 13.67 MiB/s 00:24:32.402 Latency(us) 00:24:32.402 [2024-11-26T17:20:20.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.402 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:32.402 Verification LBA range: start 0x0 length 0x2000 00:24:32.402 nvme0n1 : 1.02 3558.40 13.90 0.00 0.00 35632.09 7330.32 29903.83 00:24:32.402 [2024-11-26T17:20:20.413Z] =================================================================================================================== 00:24:32.402 [2024-11-26T17:20:20.413Z] Total : 3558.40 13.90 0.00 0.00 35632.09 7330.32 29903.83 00:24:32.402 { 00:24:32.402 "results": [ 00:24:32.402 { 00:24:32.402 "job": "nvme0n1", 00:24:32.402 "core_mask": "0x2", 00:24:32.402 "workload": "verify", 00:24:32.402 "status": "finished", 00:24:32.402 "verify_range": { 00:24:32.402 "start": 0, 00:24:32.402 "length": 8192 00:24:32.402 }, 00:24:32.402 "queue_depth": 128, 00:24:32.402 "io_size": 4096, 00:24:32.402 "runtime": 1.019841, 00:24:32.402 "iops": 3558.3978286811375, 00:24:32.402 "mibps": 13.899991518285693, 00:24:32.402 "io_failed": 0, 00:24:32.402 "io_timeout": 0, 00:24:32.402 "avg_latency_us": 35632.08566383964, 00:24:32.402 "min_latency_us": 7330.322962962963, 00:24:32.402 "max_latency_us": 29903.834074074075 00:24:32.402 } 00:24:32.402 ], 00:24:32.402 "core_count": 1 00:24:32.402 } 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:32.402 nvmf_trace.0 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 636737 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 636737 ']' 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 636737 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 636737 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 636737' 00:24:32.402 killing process with pid 636737 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 636737 00:24:32.402 Received shutdown signal, test time was about 1.000000 seconds 00:24:32.402 00:24:32.402 Latency(us) 00:24:32.402 [2024-11-26T17:20:20.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.402 [2024-11-26T17:20:20.413Z] =================================================================================================================== 00:24:32.402 [2024-11-26T17:20:20.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 636737 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.402 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:32.402 rmmod nvme_tcp 00:24:32.402 rmmod nvme_fabrics 00:24:32.402 rmmod nvme_keyring 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:32.661 18:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 636586 ']' 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 636586 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 636586 ']' 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 636586 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 636586 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 636586' 00:24:32.661 killing process with pid 636586 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 636586 00:24:32.661 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 636586 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.920 18:20:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.827 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:34.827 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Rkp8QaWXZZ /tmp/tmp.WLhSMjb7Ar /tmp/tmp.JSIqjWZCxi 00:24:34.827 00:24:34.827 real 1m23.189s 00:24:34.827 user 2m20.899s 00:24:34.827 sys 0m24.096s 00:24:34.827 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:34.827 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.827 ************************************ 00:24:34.827 END TEST nvmf_tls 00:24:34.827 
************************************ 00:24:34.827 18:20:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:34.827 18:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:34.827 18:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.827 18:20:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:34.827 ************************************ 00:24:34.827 START TEST nvmf_fips 00:24:34.827 ************************************ 00:24:34.827 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:35.085 * Looking for test storage... 00:24:35.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.085 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:35.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.086 --rc genhtml_branch_coverage=1 00:24:35.086 --rc genhtml_function_coverage=1 00:24:35.086 --rc genhtml_legend=1 00:24:35.086 --rc geninfo_all_blocks=1 00:24:35.086 --rc geninfo_unexecuted_blocks=1 00:24:35.086 00:24:35.086 ' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:35.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.086 --rc genhtml_branch_coverage=1 00:24:35.086 --rc genhtml_function_coverage=1 00:24:35.086 --rc genhtml_legend=1 00:24:35.086 --rc geninfo_all_blocks=1 00:24:35.086 --rc geninfo_unexecuted_blocks=1 00:24:35.086 00:24:35.086 ' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:35.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.086 --rc genhtml_branch_coverage=1 00:24:35.086 --rc genhtml_function_coverage=1 00:24:35.086 --rc genhtml_legend=1 00:24:35.086 --rc geninfo_all_blocks=1 00:24:35.086 --rc geninfo_unexecuted_blocks=1 00:24:35.086 00:24:35.086 ' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:35.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.086 --rc genhtml_branch_coverage=1 00:24:35.086 --rc genhtml_function_coverage=1 00:24:35.086 --rc genhtml_legend=1 00:24:35.086 --rc geninfo_all_blocks=1 00:24:35.086 --rc geninfo_unexecuted_blocks=1 00:24:35.086 00:24:35.086 ' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:35.086 18:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.086 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:35.087 18:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:35.087 Error setting digest 00:24:35.087 40A20131337F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:35.087 40A20131337F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:35.087 
18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.087 18:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.641 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.642 18:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:37.642 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:37.642 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.642 18:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:37.642 Found net devices under 0000:09:00.0: cvl_0_0 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:37.642 Found net devices under 0000:09:00.1: cvl_0_1 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.642 18:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:37.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:24:37.642 00:24:37.642 --- 10.0.0.2 ping statistics --- 00:24:37.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.642 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:24:37.642 00:24:37.642 --- 10.0.0.1 ping statistics --- 00:24:37.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.642 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:37.642 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=638984 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 638984 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 638984 ']' 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:37.643 [2024-11-26 18:20:25.394496] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:24:37.643 [2024-11-26 18:20:25.394579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.643 [2024-11-26 18:20:25.466329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.643 [2024-11-26 18:20:25.524965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.643 [2024-11-26 18:20:25.525023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.643 [2024-11-26 18:20:25.525036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.643 [2024-11-26 18:20:25.525047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.643 [2024-11-26 18:20:25.525056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.643 [2024-11-26 18:20:25.525636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:37.643 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ijW 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ijW 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ijW 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ijW 00:24:37.901 18:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:38.159 [2024-11-26 18:20:25.988130] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.159 [2024-11-26 18:20:26.004130] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:38.159 [2024-11-26 18:20:26.004398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.159 malloc0 00:24:38.159 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:38.159 18:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=639130 00:24:38.159 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:38.159 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 639130 /var/tmp/bdevperf.sock 00:24:38.159 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 639130 ']' 00:24:38.159 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:38.159 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.159 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:38.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:38.159 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.159 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:38.159 [2024-11-26 18:20:26.138017] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:38.159 [2024-11-26 18:20:26.138123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639130 ] 00:24:38.417 [2024-11-26 18:20:26.205369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.417 [2024-11-26 18:20:26.263614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.417 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.417 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:38.417 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ijW 00:24:38.675 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:38.932 [2024-11-26 18:20:26.895112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.191 TLSTESTn1 00:24:39.191 18:20:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.191 Running I/O for 10 seconds... 
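For reference, the TLS piece of this run reduces to a short command sequence; the sketch below only condenses the steps already visible in the trace above (PSK file, keyring registration, TLS attach, perform_tests). $SPDK_ROOT stands in for the Jenkins workspace path and the backgrounding is simplified, so treat it as a sketch of this run rather than the harness itself.

SPDK_ROOT=${SPDK_ROOT:-/path/to/spdk}      # assumption: point at your SPDK checkout
RPC="$SPDK_ROOT/scripts/rpc.py"
BPERF_SOCK=/var/tmp/bdevperf.sock

# Interchange PSK taken from the trace; keep the key file private
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"

# bdevperf is a second SPDK app with its own RPC socket; -z makes it wait for RPCs
"$SPDK_ROOT/build/examples/bdevperf" -m 0x4 -z -r "$BPERF_SOCK" -q 128 -o 4096 -w verify -t 10 &

# Register the PSK in bdevperf's keyring, then attach to the TLS listener on 10.0.0.2:4420
"$RPC" -s "$BPERF_SOCK" keyring_file_add_key key0 "$key_path"
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Start the timed I/O phase whose progress and latency summary follow below
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests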
00:24:41.140 3526.00 IOPS, 13.77 MiB/s [2024-11-26T17:20:30.523Z] 3432.50 IOPS, 13.41 MiB/s [2024-11-26T17:20:31.456Z] 3484.67 IOPS, 13.61 MiB/s [2024-11-26T17:20:32.388Z] 3518.25 IOPS, 13.74 MiB/s [2024-11-26T17:20:33.320Z] 3529.40 IOPS, 13.79 MiB/s [2024-11-26T17:20:34.251Z] 3541.67 IOPS, 13.83 MiB/s [2024-11-26T17:20:35.184Z] 3543.71 IOPS, 13.84 MiB/s [2024-11-26T17:20:36.556Z] 3543.88 IOPS, 13.84 MiB/s [2024-11-26T17:20:37.120Z] 3535.44 IOPS, 13.81 MiB/s [2024-11-26T17:20:37.377Z] 3542.20 IOPS, 13.84 MiB/s 00:24:49.366 Latency(us) 00:24:49.366 [2024-11-26T17:20:37.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.366 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:49.366 Verification LBA range: start 0x0 length 0x2000 00:24:49.366 TLSTESTn1 : 10.02 3548.07 13.86 0.00 0.00 36017.18 6990.51 30292.20 00:24:49.366 [2024-11-26T17:20:37.377Z] =================================================================================================================== 00:24:49.366 [2024-11-26T17:20:37.377Z] Total : 3548.07 13.86 0.00 0.00 36017.18 6990.51 30292.20 00:24:49.366 { 00:24:49.366 "results": [ 00:24:49.366 { 00:24:49.366 "job": "TLSTESTn1", 00:24:49.366 "core_mask": "0x4", 00:24:49.366 "workload": "verify", 00:24:49.366 "status": "finished", 00:24:49.366 "verify_range": { 00:24:49.366 "start": 0, 00:24:49.366 "length": 8192 00:24:49.366 }, 00:24:49.366 "queue_depth": 128, 00:24:49.366 "io_size": 4096, 00:24:49.366 "runtime": 10.019253, 00:24:49.366 "iops": 3548.0689029411674, 00:24:49.366 "mibps": 13.859644152113935, 00:24:49.366 "io_failed": 0, 00:24:49.366 "io_timeout": 0, 00:24:49.366 "avg_latency_us": 36017.17557874733, 00:24:49.366 "min_latency_us": 6990.506666666667, 00:24:49.366 "max_latency_us": 30292.195555555554 00:24:49.366 } 00:24:49.366 ], 00:24:49.366 "core_count": 1 00:24:49.366 } 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:49.366 nvmf_trace.0 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 639130 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 639130 ']' 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 639130 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 639130 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 639130' 00:24:49.366 killing process with pid 639130 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 639130 00:24:49.366 Received shutdown signal, test time was about 10.000000 seconds 00:24:49.366 00:24:49.366 Latency(us) 00:24:49.366 [2024-11-26T17:20:37.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.366 [2024-11-26T17:20:37.377Z] =================================================================================================================== 00:24:49.366 [2024-11-26T17:20:37.377Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:49.366 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 639130 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:49.623 rmmod nvme_tcp 00:24:49.623 rmmod nvme_fabrics 00:24:49.623 rmmod nvme_keyring 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 638984 ']' 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 638984 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 638984 ']' 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 638984 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 638984 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:49.623 18:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 638984' 00:24:49.623 killing process with pid 638984 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 638984 00:24:49.623 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 638984 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.881 18:20:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ijW 00:24:52.419 00:24:52.419 real 0m17.072s 00:24:52.419 user 0m22.731s 00:24:52.419 sys 0m5.376s 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:52.419 ************************************ 00:24:52.419 END TEST nvmf_fips 00:24:52.419 ************************************ 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:52.419 ************************************ 00:24:52.419 START TEST nvmf_control_msg_list 00:24:52.419 ************************************ 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:52.419 * Looking for test storage... 
00:24:52.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:24:52.419 18:20:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:52.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.419 --rc genhtml_branch_coverage=1 00:24:52.419 --rc genhtml_function_coverage=1 00:24:52.419 --rc genhtml_legend=1 00:24:52.419 --rc geninfo_all_blocks=1 00:24:52.419 --rc geninfo_unexecuted_blocks=1 00:24:52.419 00:24:52.419 ' 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:52.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.419 --rc genhtml_branch_coverage=1 00:24:52.419 --rc genhtml_function_coverage=1 00:24:52.419 --rc genhtml_legend=1 00:24:52.419 --rc geninfo_all_blocks=1 00:24:52.419 --rc geninfo_unexecuted_blocks=1 00:24:52.419 00:24:52.419 ' 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:52.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.419 --rc genhtml_branch_coverage=1 00:24:52.419 --rc genhtml_function_coverage=1 00:24:52.419 --rc genhtml_legend=1 00:24:52.419 --rc geninfo_all_blocks=1 00:24:52.419 --rc geninfo_unexecuted_blocks=1 00:24:52.419 00:24:52.419 ' 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:52.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.419 --rc genhtml_branch_coverage=1 00:24:52.419 --rc genhtml_function_coverage=1 00:24:52.419 --rc genhtml_legend=1 00:24:52.419 --rc geninfo_all_blocks=1 00:24:52.419 --rc geninfo_unexecuted_blocks=1 00:24:52.419 00:24:52.419 ' 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.419 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:52.420 18:20:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.321 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:54.322 18:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:54.322 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.322 18:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:54.322 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:54.322 Found net devices under 0000:09:00.0: cvl_0_0 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:54.322 Found net devices under 0000:09:00.1: cvl_0_1 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.322 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.323 18:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:54.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:54.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:24:54.323 00:24:54.323 --- 10.0.0.2 ping statistics --- 00:24:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.323 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:54.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:54.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:24:54.323 00:24:54.323 --- 10.0.0.1 ping statistics --- 00:24:54.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.323 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:54.323 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=642397 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 642397 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 642397 ']' 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.581 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.582 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.582 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.582 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.582 [2024-11-26 18:20:42.401179] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:24:54.582 [2024-11-26 18:20:42.401290] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.582 [2024-11-26 18:20:42.473038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.582 [2024-11-26 18:20:42.527686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.582 [2024-11-26 18:20:42.527745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.582 [2024-11-26 18:20:42.527778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.582 [2024-11-26 18:20:42.527790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.582 [2024-11-26 18:20:42.527799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.582 [2024-11-26 18:20:42.528368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 [2024-11-26 18:20:42.676372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 Malloc0 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.840 18:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:54.840 [2024-11-26 18:20:42.716205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=642422 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=642423 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=642424 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 642422 00:24:54.840 18:20:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.841 [2024-11-26 18:20:42.774736] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:54.841 [2024-11-26 18:20:42.785081] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:54.841 [2024-11-26 18:20:42.785356] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:56.213 Initializing NVMe Controllers 00:24:56.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:56.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:56.213 Initialization complete. Launching workers. 
00:24:56.213 ======================================================== 00:24:56.213 Latency(us) 00:24:56.213 Device Information : IOPS MiB/s Average min max 00:24:56.213 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3009.00 11.75 331.91 153.53 706.77 00:24:56.213 ======================================================== 00:24:56.213 Total : 3009.00 11.75 331.91 153.53 706.77 00:24:56.213 00:24:56.213 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 642423 00:24:56.213 Initializing NVMe Controllers 00:24:56.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:56.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:56.213 Initialization complete. Launching workers. 00:24:56.213 ======================================================== 00:24:56.213 Latency(us) 00:24:56.213 Device Information : IOPS MiB/s Average min max 00:24:56.213 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2974.97 11.62 335.75 195.78 572.84 00:24:56.213 ======================================================== 00:24:56.213 Total : 2974.97 11.62 335.75 195.78 572.84 00:24:56.213 00:24:56.213 Initializing NVMe Controllers 00:24:56.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:56.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:56.213 Initialization complete. Launching workers. 00:24:56.213 ======================================================== 00:24:56.213 Latency(us) 00:24:56.213 Device Information : IOPS MiB/s Average min max 00:24:56.213 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2704.00 10.56 369.49 167.43 593.04 00:24:56.213 ======================================================== 00:24:56.213 Total : 2704.00 10.56 369.49 167.43 593.04 00:24:56.213 00:24:56.214 [2024-11-26 18:20:43.928712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7f1f0 is same with the state(6) to be set 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 642424 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:56.214 rmmod nvme_tcp 00:24:56.214 rmmod nvme_fabrics 00:24:56.214 rmmod nvme_keyring 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:56.214 18:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 642397 ']' 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 642397 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 642397 ']' 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 642397 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.214 18:20:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642397 00:24:56.214 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:56.214 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:56.214 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642397' 00:24:56.214 killing process with pid 642397 00:24:56.214 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 642397 00:24:56.214 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 642397 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.473 18:20:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.379 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:58.379 00:24:58.379 real 0m6.379s 00:24:58.379 user 0m5.465s 00:24:58.379 sys 0m2.696s 00:24:58.379 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.379 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:24:58.379 ************************************ 00:24:58.379 END TEST nvmf_control_msg_list 00:24:58.379 ************************************ 00:24:58.379 18:20:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:58.379 18:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:58.379 18:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.379 18:20:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:58.379 ************************************ 00:24:58.379 START TEST nvmf_wait_for_buf 00:24:58.379 ************************************ 00:24:58.379 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:58.637 * Looking for test storage... 00:24:58.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.637 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:58.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.638 --rc genhtml_branch_coverage=1 00:24:58.638 --rc genhtml_function_coverage=1 00:24:58.638 --rc genhtml_legend=1 00:24:58.638 --rc geninfo_all_blocks=1 00:24:58.638 --rc geninfo_unexecuted_blocks=1 00:24:58.638 00:24:58.638 ' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:58.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.638 --rc genhtml_branch_coverage=1 00:24:58.638 --rc genhtml_function_coverage=1 00:24:58.638 --rc genhtml_legend=1 00:24:58.638 --rc geninfo_all_blocks=1 00:24:58.638 --rc geninfo_unexecuted_blocks=1 00:24:58.638 00:24:58.638 ' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:58.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.638 --rc genhtml_branch_coverage=1 00:24:58.638 --rc genhtml_function_coverage=1 00:24:58.638 --rc genhtml_legend=1 00:24:58.638 --rc geninfo_all_blocks=1 00:24:58.638 --rc geninfo_unexecuted_blocks=1 00:24:58.638 00:24:58.638 ' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:58.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.638 --rc genhtml_branch_coverage=1 00:24:58.638 --rc genhtml_function_coverage=1 00:24:58.638 --rc genhtml_legend=1 00:24:58.638 --rc geninfo_all_blocks=1 00:24:58.638 --rc geninfo_unexecuted_blocks=1 00:24:58.638 00:24:58.638 ' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.638 18:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.638 18:20:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.170 
18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:01.170 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:01.170 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:01.170 Found net devices under 0000:09:00.0: cvl_0_0 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:01.170 Found net devices under 0000:09:00.1: cvl_0_1 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:01.170 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.171 18:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:01.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:25:01.171 00:25:01.171 --- 10.0.0.2 ping statistics --- 00:25:01.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.171 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:25:01.171 00:25:01.171 --- 10.0.0.1 ping statistics --- 00:25:01.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.171 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=644615 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 644615 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 644615 ']' 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.171 18:20:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.171 [2024-11-26 18:20:48.949217] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:25:01.171 [2024-11-26 18:20:48.949318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.171 [2024-11-26 18:20:49.019649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.171 [2024-11-26 18:20:49.073473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.171 [2024-11-26 18:20:49.073527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.171 [2024-11-26 18:20:49.073550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.171 [2024-11-26 18:20:49.073561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.171 [2024-11-26 18:20:49.073571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.171 [2024-11-26 18:20:49.074131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.171 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.171 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:01.171 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:01.171 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:01.171 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.429 18:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 Malloc0 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 [2024-11-26 18:20:49.323807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 [2024-11-26 18:20:49.348005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.429 18:20:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:01.429 [2024-11-26 18:20:49.430115] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:03.329 Initializing NVMe Controllers 00:25:03.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:03.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:03.329 Initialization complete. Launching workers. 00:25:03.329 ======================================================== 00:25:03.329 Latency(us) 00:25:03.329 Device Information : IOPS MiB/s Average min max 00:25:03.329 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32292.18 8020.73 63849.17 00:25:03.329 ======================================================== 00:25:03.329 Total : 129.00 16.12 32292.18 8020.73 63849.17 00:25:03.329 00:25:03.329 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:03.329 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.329 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:03.329 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:03.329 18:20:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.330 rmmod nvme_tcp 00:25:03.330 rmmod nvme_fabrics 00:25:03.330 rmmod nvme_keyring 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 644615 ']' 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 644615 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 644615 ']' 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 644615 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 644615 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 644615' 00:25:03.330 killing process with pid 644615 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 644615 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 644615 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.330 18:20:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.867 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:05.868 00:25:05.868 real 0m7.001s 00:25:05.868 user 0m3.337s 00:25:05.868 sys 0m2.135s 00:25:05.868 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.868 18:20:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.868 ************************************ 00:25:05.868 END TEST nvmf_wait_for_buf 00:25:05.868 ************************************ 00:25:05.868 18:20:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:25:05.868 18:20:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:25:05.868 18:20:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:25:05.868 18:20:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:25:05.868 18:20:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:25:05.868 18:20:53 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:07.769 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:07.769 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:07.769 Found net devices under 0000:09:00.0: cvl_0_0 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # 
echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:07.769 Found net devices under 0000:09:00.1: cvl_0_1 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:07.769 ************************************ 00:25:07.769 START TEST nvmf_perf_adq 00:25:07.769 ************************************ 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:25:07.769 * Looking for test storage... 00:25:07.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:25:07.769 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:07.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.770 --rc genhtml_branch_coverage=1 00:25:07.770 --rc genhtml_function_coverage=1 00:25:07.770 --rc genhtml_legend=1 00:25:07.770 --rc geninfo_all_blocks=1 00:25:07.770 --rc geninfo_unexecuted_blocks=1 00:25:07.770 00:25:07.770 ' 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:07.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.770 --rc genhtml_branch_coverage=1 00:25:07.770 --rc genhtml_function_coverage=1 00:25:07.770 --rc genhtml_legend=1 00:25:07.770 --rc geninfo_all_blocks=1 00:25:07.770 --rc geninfo_unexecuted_blocks=1 00:25:07.770 00:25:07.770 ' 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:07.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.770 --rc genhtml_branch_coverage=1 00:25:07.770 --rc genhtml_function_coverage=1 00:25:07.770 --rc genhtml_legend=1 00:25:07.770 --rc geninfo_all_blocks=1 00:25:07.770 --rc geninfo_unexecuted_blocks=1 00:25:07.770 00:25:07.770 ' 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:07.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.770 --rc genhtml_branch_coverage=1 00:25:07.770 --rc genhtml_function_coverage=1 00:25:07.770 --rc genhtml_legend=1 00:25:07.770 --rc geninfo_all_blocks=1 00:25:07.770 --rc geninfo_unexecuted_blocks=1 00:25:07.770 00:25:07.770 ' 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.770 18:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.770 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:25:07.771 18:20:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.771 18:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:10.303 18:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:10.303 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:10.303 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:10.303 Found net devices under 0000:09:00.0: cvl_0_0 00:25:10.303 18:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:10.303 Found net devices under 0000:09:00.1: cvl_0_1 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:10.303 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:25:10.304 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:10.304 18:20:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:10.562 18:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:12.463 18:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:17.778 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:17.778 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:17.778 Found net devices under 0000:09:00.0: cvl_0_0 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:17.778 Found net devices under 0000:09:00.1: cvl_0_1 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:17.778 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:17.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:25:17.779 00:25:17.779 --- 10.0.0.2 ping statistics --- 00:25:17.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.779 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:25:17.779 00:25:17.779 --- 10.0.0.1 ping statistics --- 00:25:17.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.779 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=649457 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 649457 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 649457 ']' 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.779 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:17.779 [2024-11-26 18:21:05.664378] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
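The nvmftestinit/nvmf_tcp_init steps traced above move the target NIC port into its own network namespace, address both ends of the link, open TCP port 4420, verify reachability in both directions, and only then launch nvmf_tgt inside that namespace with --wait-for-rpc. A condensed sketch of that bring-up, with the interface names, addresses, and binary path copied from this log (treat them as environment-specific values):

  # Target port lives in its own namespace; the initiator port stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  modprobe nvme-tcp                                                   # host-side NVMe/TCP module loaded by the common setup
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &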
00:25:17.779 [2024-11-26 18:21:05.664458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.779 [2024-11-26 18:21:05.740038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:18.036 [2024-11-26 18:21:05.804767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.036 [2024-11-26 18:21:05.804831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.036 [2024-11-26 18:21:05.804846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.036 [2024-11-26 18:21:05.804857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.036 [2024-11-26 18:21:05.804891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.036 [2024-11-26 18:21:05.806509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.036 [2024-11-26 18:21:05.806560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.036 [2024-11-26 18:21:05.806617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:18.036 [2024-11-26 18:21:05.806621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.036 
18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.036 18:21:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.293 [2024-11-26 18:21:06.097681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.293 Malloc1 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:18.293 [2024-11-26 18:21:06.162430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=649602 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:25:18.293 18:21:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:20.190 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:25:20.190 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.190 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:20.190 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.190 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:25:20.190 "tick_rate": 2700000000, 00:25:20.190 "poll_groups": [ 00:25:20.190 { 00:25:20.190 "name": "nvmf_tgt_poll_group_000", 00:25:20.190 "admin_qpairs": 1, 00:25:20.190 "io_qpairs": 1, 00:25:20.190 "current_admin_qpairs": 1, 00:25:20.190 "current_io_qpairs": 1, 00:25:20.190 "pending_bdev_io": 0, 00:25:20.190 "completed_nvme_io": 19652, 00:25:20.190 "transports": [ 00:25:20.190 { 00:25:20.190 "trtype": "TCP" 00:25:20.190 } 00:25:20.190 ] 00:25:20.190 }, 00:25:20.190 { 00:25:20.190 "name": "nvmf_tgt_poll_group_001", 00:25:20.190 "admin_qpairs": 0, 00:25:20.190 "io_qpairs": 1, 00:25:20.190 "current_admin_qpairs": 0, 00:25:20.190 "current_io_qpairs": 1, 00:25:20.190 "pending_bdev_io": 0, 00:25:20.190 "completed_nvme_io": 19863, 00:25:20.190 "transports": [ 00:25:20.190 { 00:25:20.190 "trtype": "TCP" 00:25:20.190 } 00:25:20.190 ] 00:25:20.190 }, 00:25:20.190 { 00:25:20.190 "name": "nvmf_tgt_poll_group_002", 00:25:20.190 "admin_qpairs": 0, 00:25:20.190 "io_qpairs": 1, 00:25:20.190 "current_admin_qpairs": 0, 00:25:20.190 "current_io_qpairs": 1, 00:25:20.190 "pending_bdev_io": 0, 00:25:20.190 "completed_nvme_io": 20285, 00:25:20.190 "transports": [ 00:25:20.190 { 00:25:20.190 "trtype": "TCP" 00:25:20.190 } 00:25:20.190 ] 00:25:20.190 }, 00:25:20.190 { 00:25:20.190 "name": "nvmf_tgt_poll_group_003", 00:25:20.190 "admin_qpairs": 0, 00:25:20.190 "io_qpairs": 1, 00:25:20.190 "current_admin_qpairs": 0, 00:25:20.190 "current_io_qpairs": 1, 00:25:20.190 "pending_bdev_io": 0, 00:25:20.190 "completed_nvme_io": 19763, 00:25:20.190 "transports": [ 00:25:20.190 { 00:25:20.190 "trtype": "TCP" 00:25:20.190 } 00:25:20.190 ] 00:25:20.190 } 00:25:20.190 ] 00:25:20.190 }' 00:25:20.190 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:25:20.190 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:25:20.448 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:25:20.448 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:25:20.448 18:21:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 649602 00:25:28.558 Initializing NVMe Controllers 00:25:28.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:28.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:28.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:28.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:28.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:25:28.558 Initialization complete. Launching workers. 00:25:28.558 ======================================================== 00:25:28.558 Latency(us) 00:25:28.558 Device Information : IOPS MiB/s Average min max 00:25:28.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10354.06 40.45 6181.67 2433.92 10650.75 00:25:28.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10413.26 40.68 6146.03 2456.95 10031.08 00:25:28.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10717.05 41.86 5972.89 2576.33 9835.54 00:25:28.558 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10354.76 40.45 6181.58 2429.77 10196.95 00:25:28.558 ======================================================== 00:25:28.558 Total : 41839.14 163.43 6119.30 2429.77 10650.75 00:25:28.558 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:28.558 rmmod nvme_tcp 00:25:28.558 rmmod nvme_fabrics 00:25:28.558 rmmod nvme_keyring 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 649457 ']' 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 649457 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 649457 ']' 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 649457 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 649457 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 649457' 00:25:28.558 killing process with pid 649457 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 649457 00:25:28.558 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 649457 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.817 18:21:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.359 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.359 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:25:31.359 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:25:31.359 18:21:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:25:31.359 18:21:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:25:33.265 18:21:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
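Between the two passes the harness tears the target down and reloads the ice driver so the E810 channel state starts clean before the busy-poll configuration (perf_adq.sh@94 and @58-63 above). A rough stand-alone equivalent, assuming the cvl_* ports are not needed for anything else while the driver is out and that the 5-second settle time from the log is sufficient:

# Sketch of the adq_reload_driver step: reset E810 queue/channel state.
modprobe -a sch_mqprio        # qdisc module needed by the mqprio setup later
rmmod ice || true             # tolerate the driver not being loaded yet
modprobe ice
sleep 5                       # let the NIC re-register its netdevs before reconfiguring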
mellanox=0x15b3 pci net_dev 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:38.550 18:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:38.550 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:38.550 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:38.550 Found net devices under 0000:09:00.0: cvl_0_0 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.550 18:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:38.550 Found net devices under 0000:09:00.1: cvl_0_1 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.550 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:38.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:38.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:25:38.551 00:25:38.551 --- 10.0.0.2 ping statistics --- 00:25:38.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.551 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:25:38.551 00:25:38.551 --- 10.0.0.1 ping statistics --- 00:25:38.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.551 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:38.551 net.core.busy_poll = 1 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
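The phy topology used here keeps the initiator port (cvl_0_1) in the root namespace and moves the target port (cvl_0_0) into a private namespace, then opens TCP/4420 and checks reachability both ways. Condensed from the commands traced above; the cvl_* names and 10.0.0.x addresses are specific to this host:

# Sketch: split the two E810 ports between host and target namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP listener port in from the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator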
00:25:38.551 net.core.busy_read = 1 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=652728 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 652728 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 652728 ']' 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.551 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:38.551 [2024-11-26 18:21:26.552680] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:25:38.551 [2024-11-26 18:21:26.552780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.810 [2024-11-26 18:21:26.624248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.810 [2024-11-26 18:21:26.679479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
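This second pass is the ADQ-proper configuration: hardware TC offload on the target port, busy polling enabled, two traffic classes in channel mode, and a hardware-only flower filter that pins the NVMe/TCP listener's 5-tuple to TC 1. A sketch of just that step, with the 2-queues-per-TC split and addresses copied from this run:

# Sketch: ADQ steering for NVMe/TCP on the E810 port inside the target namespace.
NS='ip netns exec cvl_0_0_ns_spdk'          # namespace prefix used throughout the log
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes, two hardware queues each, channel mode.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Steer traffic to 10.0.0.2:4420 into TC 1 entirely in hardware (skip_sw).
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1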
00:25:38.810 [2024-11-26 18:21:26.679534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.810 [2024-11-26 18:21:26.679562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.811 [2024-11-26 18:21:26.679572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.811 [2024-11-26 18:21:26.679581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.811 [2024-11-26 18:21:26.681017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.811 [2024-11-26 18:21:26.681125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.811 [2024-11-26 18:21:26.681219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.811 [2024-11-26 18:21:26.681227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:38.811 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.069 18:21:26 
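The target application is launched inside the namespace with --wait-for-rpc so that socket-level options can be applied before the framework initializes (nvmfappstart at perf_adq.sh@99 above). A minimal manual equivalent, using a single namespace prefix (the logged command repeats the prefix, which appears redundant) and a simple wait for the default RPC socket in place of the harness's waitforlisten helper:

# Sketch: start nvmf_tgt gated on RPC, then wait for /var/tmp/spdk.sock.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}   # assumed location
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.2; done
echo "nvmf_tgt up as pid $nvmfpid"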
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.069 [2024-11-26 18:21:26.959564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.069 18:21:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.069 Malloc1 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:39.069 [2024-11-26 18:21:27.021030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=652755 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:25:39.069 18:21:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:41.086 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:25:41.086 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.086 18:21:29 
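With the app gated on --wait-for-rpc, the posix sock implementation options (placement-id for ADQ, zero-copy send) are applied before framework_start_init, and only then are the transport, malloc bdev, subsystem, namespace and listener created. The same sequence through rpc.py, with every size, NQN and address taken from the trace above:

RPC="$SPDK_DIR/scripts/rpc.py"     # assumes SPDK_DIR as in the earlier sketch
$RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420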
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:41.086 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.086 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:25:41.086 "tick_rate": 2700000000, 00:25:41.086 "poll_groups": [ 00:25:41.086 { 00:25:41.086 "name": "nvmf_tgt_poll_group_000", 00:25:41.086 "admin_qpairs": 1, 00:25:41.086 "io_qpairs": 3, 00:25:41.086 "current_admin_qpairs": 1, 00:25:41.086 "current_io_qpairs": 3, 00:25:41.086 "pending_bdev_io": 0, 00:25:41.086 "completed_nvme_io": 26252, 00:25:41.086 "transports": [ 00:25:41.086 { 00:25:41.086 "trtype": "TCP" 00:25:41.086 } 00:25:41.086 ] 00:25:41.086 }, 00:25:41.086 { 00:25:41.086 "name": "nvmf_tgt_poll_group_001", 00:25:41.086 "admin_qpairs": 0, 00:25:41.086 "io_qpairs": 1, 00:25:41.086 "current_admin_qpairs": 0, 00:25:41.086 "current_io_qpairs": 1, 00:25:41.086 "pending_bdev_io": 0, 00:25:41.086 "completed_nvme_io": 24831, 00:25:41.086 "transports": [ 00:25:41.086 { 00:25:41.086 "trtype": "TCP" 00:25:41.086 } 00:25:41.086 ] 00:25:41.086 }, 00:25:41.086 { 00:25:41.086 "name": "nvmf_tgt_poll_group_002", 00:25:41.086 "admin_qpairs": 0, 00:25:41.086 "io_qpairs": 0, 00:25:41.086 "current_admin_qpairs": 0, 00:25:41.086 "current_io_qpairs": 0, 00:25:41.086 "pending_bdev_io": 0, 00:25:41.086 "completed_nvme_io": 0, 00:25:41.086 "transports": [ 00:25:41.086 { 00:25:41.086 "trtype": "TCP" 00:25:41.086 } 00:25:41.086 ] 00:25:41.086 }, 00:25:41.086 { 00:25:41.086 "name": "nvmf_tgt_poll_group_003", 00:25:41.086 "admin_qpairs": 0, 00:25:41.086 "io_qpairs": 0, 00:25:41.086 "current_admin_qpairs": 0, 00:25:41.086 "current_io_qpairs": 0, 00:25:41.086 "pending_bdev_io": 0, 00:25:41.086 "completed_nvme_io": 0, 00:25:41.086 "transports": [ 00:25:41.086 { 00:25:41.086 "trtype": "TCP" 00:25:41.086 } 00:25:41.086 ] 00:25:41.086 } 00:25:41.086 ] 00:25:41.086 }' 00:25:41.086 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:41.086 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:25:41.380 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:25:41.380 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:25:41.380 18:21:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 652755 00:25:49.485 Initializing NVMe Controllers 00:25:49.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:49.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:49.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:49.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:49.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:49.485 Initialization complete. Launching workers. 
00:25:49.485 ======================================================== 00:25:49.485 Latency(us) 00:25:49.486 Device Information : IOPS MiB/s Average min max 00:25:49.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4233.17 16.54 15121.55 1729.91 62418.57 00:25:49.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13378.11 52.26 4784.21 1304.36 7243.70 00:25:49.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5283.67 20.64 12116.25 1695.93 62045.87 00:25:49.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4212.17 16.45 15197.45 2217.75 63179.19 00:25:49.486 ======================================================== 00:25:49.486 Total : 27107.12 105.89 9445.80 1304.36 63179.19 00:25:49.486 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:49.486 rmmod nvme_tcp 00:25:49.486 rmmod nvme_fabrics 00:25:49.486 rmmod nvme_keyring 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 652728 ']' 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 652728 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 652728 ']' 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 652728 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 652728 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 652728' 00:25:49.486 killing process with pid 652728 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 652728 00:25:49.486 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 652728 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:49.744 18:21:37 
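Both passes drive the target with the same initiator command (perf_adq.sh@101 for this pass): 4 KiB random reads at queue depth 64 for 10 seconds from cores 4-7 (-c 0xF0). In this busy-poll pass the stats above show the four I/O qpairs landing on only two poll groups (3 + 1), which is what the idle-poll-group count at perf_adq.sh@108-109 checks, and the per-core IOPS in the table are correspondingly uneven. For reference, the load-generator invocation:

# Initiator-side load generator, as used for both runs in this log.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'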
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:49.744 18:21:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:25:53.031 00:25:53.031 real 0m45.062s 00:25:53.031 user 2m41.199s 00:25:53.031 sys 0m9.062s 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:25:53.031 ************************************ 00:25:53.031 END TEST nvmf_perf_adq 00:25:53.031 ************************************ 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:53.031 ************************************ 00:25:53.031 START TEST nvmf_shutdown 00:25:53.031 ************************************ 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:53.031 * Looking for test storage... 
00:25:53.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:53.031 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.032 --rc genhtml_branch_coverage=1 00:25:53.032 --rc genhtml_function_coverage=1 00:25:53.032 --rc genhtml_legend=1 00:25:53.032 --rc geninfo_all_blocks=1 00:25:53.032 --rc geninfo_unexecuted_blocks=1 00:25:53.032 00:25:53.032 ' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:53.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.032 --rc genhtml_branch_coverage=1 00:25:53.032 --rc genhtml_function_coverage=1 00:25:53.032 --rc genhtml_legend=1 00:25:53.032 --rc geninfo_all_blocks=1 00:25:53.032 --rc geninfo_unexecuted_blocks=1 00:25:53.032 00:25:53.032 ' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.032 --rc genhtml_branch_coverage=1 00:25:53.032 --rc genhtml_function_coverage=1 00:25:53.032 --rc genhtml_legend=1 00:25:53.032 --rc geninfo_all_blocks=1 00:25:53.032 --rc geninfo_unexecuted_blocks=1 00:25:53.032 00:25:53.032 ' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.032 --rc genhtml_branch_coverage=1 00:25:53.032 --rc genhtml_function_coverage=1 00:25:53.032 --rc genhtml_legend=1 00:25:53.032 --rc geninfo_all_blocks=1 00:25:53.032 --rc geninfo_unexecuted_blocks=1 00:25:53.032 00:25:53.032 ' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
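The shutdown suite opens by comparing the installed lcov version against 2 with the lt/cmp_versions helpers traced above, which split both versions on dots and compare component by component, padding the shorter one. A self-contained sketch of the same idea (numeric components only; the traced cmp_versions also takes the comparison operator as an argument):

# Sketch: "is version A strictly less than version B", dot-separated numeric fields.
version_lt() {
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'    # mirrors the lt 1.15 2 check above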
00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:53.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:53.032 18:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:53.032 ************************************ 00:25:53.032 START TEST nvmf_shutdown_tc1 00:25:53.032 ************************************ 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.032 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.033 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.033 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:53.033 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:53.033 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:25:53.033 18:21:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:55.564 18:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:55.564 18:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:55.564 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:55.564 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.564 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:55.564 Found net devices under 0000:09:00.0: cvl_0_0 00:25:55.565 18:21:43 
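Note on the discovery loop traced above: gather_supported_nvmf_pci_devs matches each PCI function against the supported device-ID lists (Intel E810 0x1592/0x159b, X722 0x37d2, and the Mellanox ConnectX IDs) and resolves every match to its kernel netdev through sysfs. A condensed sketch of that logic as it plays out in this run; the two 0x8086:0x159b functions and the cvl_0_0/cvl_0_1 names are specific to this host:

  # condensed from nvmf/common.sh as traced above (E810 ports 0000:09:00.0 / 0000:09:00.1)
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev directory for that PCI function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path -> interface name
      net_devs+=("${pci_net_devs[@]}")                    # -> cvl_0_0, then cvl_0_1
  done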
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:55.565 Found net devices under 0000:09:00.1: cvl_0_1 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:55.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:55.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:25:55.565 00:25:55.565 --- 10.0.0.2 ping statistics --- 00:25:55.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.565 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:55.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:55.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:25:55.565 00:25:55.565 --- 10.0.0.1 ping statistics --- 00:25:55.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:55.565 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=656062 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 656062 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 656062 ']' 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:55.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
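Note on the network setup traced above: nvmf_tcp_init turns the two E810 ports into a loopback fabric by moving cvl_0_0 into a new namespace for the target side (10.0.0.2/24) and leaving cvl_0_1 in the root namespace as the initiator side (10.0.0.1/24), opening TCP port 4420 in iptables and ping-testing both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of those steps; the interface names are host-specific and the paths are shortened:

  # condensed from the nvmf_tcp_init trace above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target for this test case then runs inside that namespace:
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1E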
00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:55.565 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:55.565 [2024-11-26 18:21:43.298840] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:25:55.565 [2024-11-26 18:21:43.298930] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.565 [2024-11-26 18:21:43.377920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.565 [2024-11-26 18:21:43.435227] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.565 [2024-11-26 18:21:43.435280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.566 [2024-11-26 18:21:43.435314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.566 [2024-11-26 18:21:43.435326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.566 [2024-11-26 18:21:43.435336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.566 [2024-11-26 18:21:43.436916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.566 [2024-11-26 18:21:43.436985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.566 [2024-11-26 18:21:43.437140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:55.566 [2024-11-26 18:21:43.437143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.566 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.566 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:55.566 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.566 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.566 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:55.824 [2024-11-26 18:21:43.581743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:25:55.824 18:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.824 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.825 18:21:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:55.825 Malloc1 
00:25:55.825 [2024-11-26 18:21:43.678597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.825 Malloc2 00:25:55.825 Malloc3 00:25:55.825 Malloc4 00:25:56.083 Malloc5 00:25:56.083 Malloc6 00:25:56.083 Malloc7 00:25:56.083 Malloc8 00:25:56.083 Malloc9 00:25:56.341 Malloc10 00:25:56.341 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.341 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:25:56.341 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=656243 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 656243 /var/tmp/bdevperf.sock 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 656243 ']' 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:56.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
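Note on the subsystem setup: after nvmf_create_transport ('-t tcp -o -u 8192') the script writes one RPC block per subsystem into rpcs.txt (the ten cat calls above) and replays the file through rpc_cmd; the trace only shows the resulting Malloc1..Malloc10 bdevs and the single listener on 10.0.0.2:4420, not the batch itself. A minimal, hypothetical sketch of a per-subsystem RPC sequence that would produce this state, using standard SPDK rpc.py methods; the sizes, serial numbers, and flags are illustrative, since the real rpcs.txt content is not expanded in the trace:

  # hypothetical equivalent of the rpcs.txt batch, i = 1..10
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 10); do
      scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done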
00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.342 { 00:25:56.342 "params": { 00:25:56.342 "name": "Nvme$subsystem", 00:25:56.342 "trtype": "$TEST_TRANSPORT", 00:25:56.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.342 "adrfam": "ipv4", 00:25:56.342 "trsvcid": "$NVMF_PORT", 00:25:56.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.342 "hdgst": ${hdgst:-false}, 00:25:56.342 "ddgst": ${ddgst:-false} 00:25:56.342 }, 00:25:56.342 "method": "bdev_nvme_attach_controller" 00:25:56.342 } 00:25:56.342 EOF 00:25:56.342 )") 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.342 { 00:25:56.342 "params": { 00:25:56.342 "name": "Nvme$subsystem", 00:25:56.342 "trtype": "$TEST_TRANSPORT", 00:25:56.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.342 "adrfam": "ipv4", 00:25:56.342 "trsvcid": "$NVMF_PORT", 00:25:56.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.342 "hdgst": ${hdgst:-false}, 00:25:56.342 "ddgst": ${ddgst:-false} 00:25:56.342 }, 00:25:56.342 "method": "bdev_nvme_attach_controller" 00:25:56.342 } 00:25:56.342 EOF 00:25:56.342 )") 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.342 { 00:25:56.342 "params": { 00:25:56.342 "name": "Nvme$subsystem", 00:25:56.342 "trtype": "$TEST_TRANSPORT", 00:25:56.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.342 "adrfam": "ipv4", 00:25:56.342 "trsvcid": "$NVMF_PORT", 00:25:56.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.342 "hdgst": ${hdgst:-false}, 00:25:56.342 "ddgst": ${ddgst:-false} 00:25:56.342 }, 00:25:56.342 "method": "bdev_nvme_attach_controller" 00:25:56.342 } 00:25:56.342 EOF 00:25:56.342 )") 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.342 { 00:25:56.342 "params": { 00:25:56.342 "name": "Nvme$subsystem", 00:25:56.342 
"trtype": "$TEST_TRANSPORT", 00:25:56.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.342 "adrfam": "ipv4", 00:25:56.342 "trsvcid": "$NVMF_PORT", 00:25:56.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.342 "hdgst": ${hdgst:-false}, 00:25:56.342 "ddgst": ${ddgst:-false} 00:25:56.342 }, 00:25:56.342 "method": "bdev_nvme_attach_controller" 00:25:56.342 } 00:25:56.342 EOF 00:25:56.342 )") 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.342 { 00:25:56.342 "params": { 00:25:56.342 "name": "Nvme$subsystem", 00:25:56.342 "trtype": "$TEST_TRANSPORT", 00:25:56.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.342 "adrfam": "ipv4", 00:25:56.342 "trsvcid": "$NVMF_PORT", 00:25:56.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.342 "hdgst": ${hdgst:-false}, 00:25:56.342 "ddgst": ${ddgst:-false} 00:25:56.342 }, 00:25:56.342 "method": "bdev_nvme_attach_controller" 00:25:56.342 } 00:25:56.342 EOF 00:25:56.342 )") 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.342 { 00:25:56.342 "params": { 00:25:56.342 "name": "Nvme$subsystem", 00:25:56.342 "trtype": "$TEST_TRANSPORT", 00:25:56.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.342 "adrfam": "ipv4", 00:25:56.342 "trsvcid": "$NVMF_PORT", 00:25:56.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.342 "hdgst": ${hdgst:-false}, 00:25:56.342 "ddgst": ${ddgst:-false} 00:25:56.342 }, 00:25:56.342 "method": "bdev_nvme_attach_controller" 00:25:56.342 } 00:25:56.342 EOF 00:25:56.342 )") 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.342 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.342 { 00:25:56.342 "params": { 00:25:56.343 "name": "Nvme$subsystem", 00:25:56.343 "trtype": "$TEST_TRANSPORT", 00:25:56.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "$NVMF_PORT", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.343 "hdgst": ${hdgst:-false}, 00:25:56.343 "ddgst": ${ddgst:-false} 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 } 00:25:56.343 EOF 00:25:56.343 )") 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.343 18:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.343 { 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme$subsystem", 00:25:56.343 "trtype": "$TEST_TRANSPORT", 00:25:56.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "$NVMF_PORT", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.343 "hdgst": ${hdgst:-false}, 00:25:56.343 "ddgst": ${ddgst:-false} 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 } 00:25:56.343 EOF 00:25:56.343 )") 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.343 { 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme$subsystem", 00:25:56.343 "trtype": "$TEST_TRANSPORT", 00:25:56.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "$NVMF_PORT", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.343 "hdgst": ${hdgst:-false}, 00:25:56.343 "ddgst": ${ddgst:-false} 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 } 00:25:56.343 EOF 00:25:56.343 )") 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:56.343 { 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme$subsystem", 00:25:56.343 "trtype": "$TEST_TRANSPORT", 00:25:56.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "$NVMF_PORT", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.343 "hdgst": ${hdgst:-false}, 00:25:56.343 "ddgst": ${ddgst:-false} 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 } 00:25:56.343 EOF 00:25:56.343 )") 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:56.343 18:21:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme1", 00:25:56.343 "trtype": "tcp", 00:25:56.343 "traddr": "10.0.0.2", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "4420", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:56.343 "hdgst": false, 00:25:56.343 "ddgst": false 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 },{ 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme2", 00:25:56.343 "trtype": "tcp", 00:25:56.343 "traddr": "10.0.0.2", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "4420", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:56.343 "hdgst": false, 00:25:56.343 "ddgst": false 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 },{ 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme3", 00:25:56.343 "trtype": "tcp", 00:25:56.343 "traddr": "10.0.0.2", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "4420", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:56.343 "hdgst": false, 00:25:56.343 "ddgst": false 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 },{ 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme4", 00:25:56.343 "trtype": "tcp", 00:25:56.343 "traddr": "10.0.0.2", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "4420", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:56.343 "hdgst": false, 00:25:56.343 "ddgst": false 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 },{ 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme5", 00:25:56.343 "trtype": "tcp", 00:25:56.343 "traddr": "10.0.0.2", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "4420", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:56.343 "hdgst": false, 00:25:56.343 "ddgst": false 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 },{ 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme6", 00:25:56.343 "trtype": "tcp", 00:25:56.343 "traddr": "10.0.0.2", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "4420", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:56.343 "hdgst": false, 00:25:56.343 "ddgst": false 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 },{ 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme7", 00:25:56.343 "trtype": "tcp", 00:25:56.343 "traddr": "10.0.0.2", 00:25:56.343 "adrfam": "ipv4", 00:25:56.343 "trsvcid": "4420", 00:25:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:56.343 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:56.343 "hdgst": false, 00:25:56.343 "ddgst": false 00:25:56.343 }, 00:25:56.343 "method": "bdev_nvme_attach_controller" 00:25:56.343 },{ 00:25:56.343 "params": { 00:25:56.343 "name": "Nvme8", 00:25:56.343 "trtype": "tcp", 00:25:56.344 "traddr": "10.0.0.2", 00:25:56.344 "adrfam": "ipv4", 00:25:56.344 "trsvcid": "4420", 00:25:56.344 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:56.344 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:25:56.344 "hdgst": false, 00:25:56.344 "ddgst": false 00:25:56.344 }, 00:25:56.344 "method": "bdev_nvme_attach_controller" 00:25:56.344 },{ 00:25:56.344 "params": { 00:25:56.344 "name": "Nvme9", 00:25:56.344 "trtype": "tcp", 00:25:56.344 "traddr": "10.0.0.2", 00:25:56.344 "adrfam": "ipv4", 00:25:56.344 "trsvcid": "4420", 00:25:56.344 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:56.344 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:56.344 "hdgst": false, 00:25:56.344 "ddgst": false 00:25:56.344 }, 00:25:56.344 "method": "bdev_nvme_attach_controller" 00:25:56.344 },{ 00:25:56.344 "params": { 00:25:56.344 "name": "Nvme10", 00:25:56.344 "trtype": "tcp", 00:25:56.344 "traddr": "10.0.0.2", 00:25:56.344 "adrfam": "ipv4", 00:25:56.344 "trsvcid": "4420", 00:25:56.344 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:56.344 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:56.344 "hdgst": false, 00:25:56.344 "ddgst": false 00:25:56.344 }, 00:25:56.344 "method": "bdev_nvme_attach_controller" 00:25:56.344 }' 00:25:56.344 [2024-11-26 18:21:44.208267] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:25:56.344 [2024-11-26 18:21:44.208384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:56.344 [2024-11-26 18:21:44.284254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.344 [2024-11-26 18:21:44.344498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.243 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.243 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:25:58.243 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:58.243 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.243 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:58.243 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.243 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 656243 00:25:58.243 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:25:58.243 18:21:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:25:59.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 656243 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 656062 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.615 { 00:25:59.615 "params": { 00:25:59.615 "name": "Nvme$subsystem", 00:25:59.615 "trtype": "$TEST_TRANSPORT", 00:25:59.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.615 "adrfam": "ipv4", 00:25:59.615 "trsvcid": "$NVMF_PORT", 00:25:59.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.615 "hdgst": ${hdgst:-false}, 00:25:59.615 "ddgst": ${ddgst:-false} 00:25:59.615 }, 00:25:59.615 "method": "bdev_nvme_attach_controller" 00:25:59.615 } 00:25:59.615 EOF 00:25:59.615 )") 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.615 { 00:25:59.615 "params": { 00:25:59.615 "name": "Nvme$subsystem", 00:25:59.615 "trtype": "$TEST_TRANSPORT", 00:25:59.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.615 "adrfam": "ipv4", 00:25:59.615 "trsvcid": "$NVMF_PORT", 00:25:59.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.615 "hdgst": ${hdgst:-false}, 00:25:59.615 "ddgst": ${ddgst:-false} 00:25:59.615 }, 00:25:59.615 "method": "bdev_nvme_attach_controller" 00:25:59.615 } 00:25:59.615 EOF 00:25:59.615 )") 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.615 { 00:25:59.615 "params": { 00:25:59.615 "name": "Nvme$subsystem", 00:25:59.615 "trtype": "$TEST_TRANSPORT", 00:25:59.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.615 "adrfam": "ipv4", 00:25:59.615 "trsvcid": "$NVMF_PORT", 00:25:59.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.615 "hdgst": ${hdgst:-false}, 00:25:59.615 "ddgst": ${ddgst:-false} 00:25:59.615 }, 00:25:59.615 "method": "bdev_nvme_attach_controller" 00:25:59.615 } 00:25:59.615 EOF 00:25:59.615 )") 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.615 { 00:25:59.615 "params": { 00:25:59.615 "name": "Nvme$subsystem", 00:25:59.615 "trtype": "$TEST_TRANSPORT", 00:25:59.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.615 "adrfam": "ipv4", 00:25:59.615 
"trsvcid": "$NVMF_PORT", 00:25:59.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.615 "hdgst": ${hdgst:-false}, 00:25:59.615 "ddgst": ${ddgst:-false} 00:25:59.615 }, 00:25:59.615 "method": "bdev_nvme_attach_controller" 00:25:59.615 } 00:25:59.615 EOF 00:25:59.615 )") 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.615 { 00:25:59.615 "params": { 00:25:59.615 "name": "Nvme$subsystem", 00:25:59.615 "trtype": "$TEST_TRANSPORT", 00:25:59.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.615 "adrfam": "ipv4", 00:25:59.615 "trsvcid": "$NVMF_PORT", 00:25:59.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.615 "hdgst": ${hdgst:-false}, 00:25:59.615 "ddgst": ${ddgst:-false} 00:25:59.615 }, 00:25:59.615 "method": "bdev_nvme_attach_controller" 00:25:59.615 } 00:25:59.615 EOF 00:25:59.615 )") 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.615 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.615 { 00:25:59.615 "params": { 00:25:59.616 "name": "Nvme$subsystem", 00:25:59.616 "trtype": "$TEST_TRANSPORT", 00:25:59.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "$NVMF_PORT", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.616 "hdgst": ${hdgst:-false}, 00:25:59.616 "ddgst": ${ddgst:-false} 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 } 00:25:59.616 EOF 00:25:59.616 )") 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.616 { 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme$subsystem", 00:25:59.616 "trtype": "$TEST_TRANSPORT", 00:25:59.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "$NVMF_PORT", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.616 "hdgst": ${hdgst:-false}, 00:25:59.616 "ddgst": ${ddgst:-false} 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 } 00:25:59.616 EOF 00:25:59.616 )") 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.616 { 00:25:59.616 
"params": { 00:25:59.616 "name": "Nvme$subsystem", 00:25:59.616 "trtype": "$TEST_TRANSPORT", 00:25:59.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "$NVMF_PORT", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.616 "hdgst": ${hdgst:-false}, 00:25:59.616 "ddgst": ${ddgst:-false} 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 } 00:25:59.616 EOF 00:25:59.616 )") 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.616 { 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme$subsystem", 00:25:59.616 "trtype": "$TEST_TRANSPORT", 00:25:59.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "$NVMF_PORT", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.616 "hdgst": ${hdgst:-false}, 00:25:59.616 "ddgst": ${ddgst:-false} 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 } 00:25:59.616 EOF 00:25:59.616 )") 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:59.616 { 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme$subsystem", 00:25:59.616 "trtype": "$TEST_TRANSPORT", 00:25:59.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "$NVMF_PORT", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:59.616 "hdgst": ${hdgst:-false}, 00:25:59.616 "ddgst": ${ddgst:-false} 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 } 00:25:59.616 EOF 00:25:59.616 )") 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:25:59.616 18:21:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme1", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 },{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme2", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 },{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme3", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 },{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme4", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 },{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme5", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 },{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme6", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 },{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme7", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 },{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme8", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 },{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme9", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 },{ 00:25:59.616 "params": { 00:25:59.616 "name": "Nvme10", 00:25:59.616 "trtype": "tcp", 00:25:59.616 "traddr": "10.0.0.2", 00:25:59.616 "adrfam": "ipv4", 00:25:59.616 "trsvcid": "4420", 00:25:59.616 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:59.616 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:59.616 "hdgst": false, 00:25:59.616 "ddgst": false 00:25:59.616 }, 00:25:59.616 "method": "bdev_nvme_attach_controller" 00:25:59.616 }' 00:25:59.616 [2024-11-26 18:21:47.267559] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:25:59.617 [2024-11-26 18:21:47.267662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid656665 ] 00:25:59.617 [2024-11-26 18:21:47.342124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.617 [2024-11-26 18:21:47.405246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.016 Running I/O for 1 seconds... 00:26:02.206 1732.00 IOPS, 108.25 MiB/s 00:26:02.206 Latency(us) 00:26:02.206 [2024-11-26T17:21:50.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.206 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme1n1 : 1.13 226.62 14.16 0.00 0.00 274575.36 18932.62 260978.92 00:26:02.206 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme2n1 : 1.18 216.90 13.56 0.00 0.00 287638.57 21262.79 270299.59 00:26:02.206 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme3n1 : 1.10 232.94 14.56 0.00 0.00 262829.13 17864.63 257872.02 00:26:02.206 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme4n1 : 1.07 238.99 14.94 0.00 0.00 251355.78 24660.95 250104.79 00:26:02.206 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme5n1 : 1.18 216.22 13.51 0.00 0.00 274808.79 20971.52 270299.59 00:26:02.206 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme6n1 : 1.19 215.11 13.44 0.00 0.00 271897.98 23010.42 298261.62 00:26:02.206 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme7n1 : 1.14 224.82 14.05 0.00 0.00 254709.95 19903.53 253211.69 00:26:02.206 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 
Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme8n1 : 1.19 268.04 16.75 0.00 0.00 211105.72 15728.64 250104.79 00:26:02.206 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme9n1 : 1.17 218.93 13.68 0.00 0.00 253333.62 19126.80 257872.02 00:26:02.206 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:02.206 Verification LBA range: start 0x0 length 0x400 00:26:02.206 Nvme10n1 : 1.20 267.03 16.69 0.00 0.00 204528.60 12281.93 256318.58 00:26:02.206 [2024-11-26T17:21:50.217Z] =================================================================================================================== 00:26:02.206 [2024-11-26T17:21:50.217Z] Total : 2325.59 145.35 0.00 0.00 252446.86 12281.93 298261.62 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.464 rmmod nvme_tcp 00:26:02.464 rmmod nvme_fabrics 00:26:02.464 rmmod nvme_keyring 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 656062 ']' 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 656062 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 656062 ']' 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 656062 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 656062 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 656062' 00:26:02.464 killing process with pid 656062 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 656062 00:26:02.464 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 656062 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.030 18:21:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.560 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:05.560 00:26:05.560 real 0m12.107s 00:26:05.560 user 0m34.909s 00:26:05.560 sys 0m3.400s 00:26:05.560 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.560 18:21:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:05.560 ************************************ 00:26:05.560 END TEST nvmf_shutdown_tc1 00:26:05.560 ************************************ 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:05.560 ************************************ 00:26:05.560 START TEST nvmf_shutdown_tc2 00:26:05.560 ************************************ 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.560 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:26:05.561 18:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:05.561 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:05.561 18:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:05.561 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:05.561 Found net devices under 0000:09:00.0: cvl_0_0 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:05.561 Found net devices under 0000:09:00.1: cvl_0_1 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:05.561 18:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:05.561 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:05.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:26:05.562 00:26:05.562 --- 10.0.0.2 ping statistics --- 00:26:05.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.562 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:05.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:05.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:26:05.562 00:26:05.562 --- 10.0.0.1 ping statistics --- 00:26:05.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.562 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=657434 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 657434 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 657434 ']' 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.562 [2024-11-26 18:21:53.275371] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:26:05.562 [2024-11-26 18:21:53.275466] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.562 [2024-11-26 18:21:53.348557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:05.562 [2024-11-26 18:21:53.407387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.562 [2024-11-26 18:21:53.407437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.562 [2024-11-26 18:21:53.407460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.562 [2024-11-26 18:21:53.407471] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.562 [2024-11-26 18:21:53.407481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.562 [2024-11-26 18:21:53.408909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.562 [2024-11-26 18:21:53.408968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.562 [2024-11-26 18:21:53.409034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:05.562 [2024-11-26 18:21:53.409038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.562 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.562 [2024-11-26 18:21:53.564117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:05.820 18:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:05.820 18:21:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:05.820 Malloc1 
00:26:05.820 [2024-11-26 18:21:53.660526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.820 Malloc2 00:26:05.820 Malloc3 00:26:05.820 Malloc4 00:26:05.820 Malloc5 00:26:06.078 Malloc6 00:26:06.078 Malloc7 00:26:06.078 Malloc8 00:26:06.078 Malloc9 00:26:06.078 Malloc10 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=657611 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 657611 /var/tmp/bdevperf.sock 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 657611 ']' 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:06.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.338 { 00:26:06.338 "params": { 00:26:06.338 "name": "Nvme$subsystem", 00:26:06.338 "trtype": "$TEST_TRANSPORT", 00:26:06.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.338 "adrfam": "ipv4", 00:26:06.338 "trsvcid": "$NVMF_PORT", 00:26:06.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.338 "hdgst": ${hdgst:-false}, 00:26:06.338 "ddgst": ${ddgst:-false} 00:26:06.338 }, 00:26:06.338 "method": "bdev_nvme_attach_controller" 00:26:06.338 } 00:26:06.338 EOF 00:26:06.338 )") 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.338 { 00:26:06.338 "params": { 00:26:06.338 "name": "Nvme$subsystem", 00:26:06.338 "trtype": "$TEST_TRANSPORT", 00:26:06.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.338 "adrfam": "ipv4", 00:26:06.338 "trsvcid": "$NVMF_PORT", 00:26:06.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.338 "hdgst": ${hdgst:-false}, 00:26:06.338 "ddgst": ${ddgst:-false} 00:26:06.338 }, 00:26:06.338 "method": "bdev_nvme_attach_controller" 00:26:06.338 } 00:26:06.338 EOF 00:26:06.338 )") 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.338 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.338 { 00:26:06.338 "params": { 00:26:06.338 "name": "Nvme$subsystem", 00:26:06.338 "trtype": "$TEST_TRANSPORT", 00:26:06.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.338 "adrfam": "ipv4", 00:26:06.338 "trsvcid": "$NVMF_PORT", 00:26:06.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.339 "hdgst": ${hdgst:-false}, 00:26:06.339 "ddgst": ${ddgst:-false} 00:26:06.339 }, 00:26:06.339 "method": "bdev_nvme_attach_controller" 00:26:06.339 } 00:26:06.339 EOF 00:26:06.339 )") 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:26:06.339 { 00:26:06.339 "params": { 00:26:06.339 "name": "Nvme$subsystem", 00:26:06.339 "trtype": "$TEST_TRANSPORT", 00:26:06.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.339 "adrfam": "ipv4", 00:26:06.339 "trsvcid": "$NVMF_PORT", 00:26:06.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.339 "hdgst": ${hdgst:-false}, 00:26:06.339 "ddgst": ${ddgst:-false} 00:26:06.339 }, 00:26:06.339 "method": "bdev_nvme_attach_controller" 00:26:06.339 } 00:26:06.339 EOF 00:26:06.339 )") 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.339 { 00:26:06.339 "params": { 00:26:06.339 "name": "Nvme$subsystem", 00:26:06.339 "trtype": "$TEST_TRANSPORT", 00:26:06.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.339 "adrfam": "ipv4", 00:26:06.339 "trsvcid": "$NVMF_PORT", 00:26:06.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.339 "hdgst": ${hdgst:-false}, 00:26:06.339 "ddgst": ${ddgst:-false} 00:26:06.339 }, 00:26:06.339 "method": "bdev_nvme_attach_controller" 00:26:06.339 } 00:26:06.339 EOF 00:26:06.339 )") 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.339 { 00:26:06.339 "params": { 00:26:06.339 "name": "Nvme$subsystem", 00:26:06.339 "trtype": "$TEST_TRANSPORT", 00:26:06.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.339 "adrfam": "ipv4", 00:26:06.339 "trsvcid": "$NVMF_PORT", 00:26:06.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.339 "hdgst": ${hdgst:-false}, 00:26:06.339 "ddgst": ${ddgst:-false} 00:26:06.339 }, 00:26:06.339 "method": "bdev_nvme_attach_controller" 00:26:06.339 } 00:26:06.339 EOF 00:26:06.339 )") 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.339 { 00:26:06.339 "params": { 00:26:06.339 "name": "Nvme$subsystem", 00:26:06.339 "trtype": "$TEST_TRANSPORT", 00:26:06.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.339 "adrfam": "ipv4", 00:26:06.339 "trsvcid": "$NVMF_PORT", 00:26:06.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.339 "hdgst": ${hdgst:-false}, 00:26:06.339 "ddgst": ${ddgst:-false} 00:26:06.339 }, 00:26:06.339 "method": "bdev_nvme_attach_controller" 00:26:06.339 } 00:26:06.339 EOF 00:26:06.339 )") 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.339 18:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.339 { 00:26:06.339 "params": { 00:26:06.339 "name": "Nvme$subsystem", 00:26:06.339 "trtype": "$TEST_TRANSPORT", 00:26:06.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.339 "adrfam": "ipv4", 00:26:06.339 "trsvcid": "$NVMF_PORT", 00:26:06.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.339 "hdgst": ${hdgst:-false}, 00:26:06.339 "ddgst": ${ddgst:-false} 00:26:06.339 }, 00:26:06.339 "method": "bdev_nvme_attach_controller" 00:26:06.339 } 00:26:06.339 EOF 00:26:06.339 )") 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.339 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.339 { 00:26:06.339 "params": { 00:26:06.339 "name": "Nvme$subsystem", 00:26:06.339 "trtype": "$TEST_TRANSPORT", 00:26:06.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.339 "adrfam": "ipv4", 00:26:06.339 "trsvcid": "$NVMF_PORT", 00:26:06.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.339 "hdgst": ${hdgst:-false}, 00:26:06.339 "ddgst": ${ddgst:-false} 00:26:06.339 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 } 00:26:06.340 EOF 00:26:06.340 )") 00:26:06.340 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.340 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:06.340 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:06.340 { 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme$subsystem", 00:26:06.340 "trtype": "$TEST_TRANSPORT", 00:26:06.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "$NVMF_PORT", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.340 "hdgst": ${hdgst:-false}, 00:26:06.340 "ddgst": ${ddgst:-false} 00:26:06.340 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 } 00:26:06.340 EOF 00:26:06.340 )") 00:26:06.340 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:26:06.340 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:26:06.340 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:26:06.340 18:21:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme1", 00:26:06.340 "trtype": "tcp", 00:26:06.340 "traddr": "10.0.0.2", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "4420", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:06.340 "hdgst": false, 00:26:06.340 "ddgst": false 00:26:06.340 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 },{ 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme2", 00:26:06.340 "trtype": "tcp", 00:26:06.340 "traddr": "10.0.0.2", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "4420", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:06.340 "hdgst": false, 00:26:06.340 "ddgst": false 00:26:06.340 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 },{ 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme3", 00:26:06.340 "trtype": "tcp", 00:26:06.340 "traddr": "10.0.0.2", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "4420", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:06.340 "hdgst": false, 00:26:06.340 "ddgst": false 00:26:06.340 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 },{ 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme4", 00:26:06.340 "trtype": "tcp", 00:26:06.340 "traddr": "10.0.0.2", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "4420", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:06.340 "hdgst": false, 00:26:06.340 "ddgst": false 00:26:06.340 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 },{ 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme5", 00:26:06.340 "trtype": "tcp", 00:26:06.340 "traddr": "10.0.0.2", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "4420", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:06.340 "hdgst": false, 00:26:06.340 "ddgst": false 00:26:06.340 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 },{ 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme6", 00:26:06.340 "trtype": "tcp", 00:26:06.340 "traddr": "10.0.0.2", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "4420", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:06.340 "hdgst": false, 00:26:06.340 "ddgst": false 00:26:06.340 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 },{ 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme7", 00:26:06.340 "trtype": "tcp", 00:26:06.340 "traddr": "10.0.0.2", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "4420", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:06.340 "hdgst": false, 00:26:06.340 "ddgst": false 00:26:06.340 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 },{ 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme8", 00:26:06.340 "trtype": "tcp", 00:26:06.340 "traddr": "10.0.0.2", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "4420", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:26:06.340 "hdgst": false, 00:26:06.340 "ddgst": false 00:26:06.340 }, 00:26:06.340 "method": "bdev_nvme_attach_controller" 00:26:06.340 },{ 00:26:06.340 "params": { 00:26:06.340 "name": "Nvme9", 00:26:06.340 "trtype": "tcp", 00:26:06.340 "traddr": "10.0.0.2", 00:26:06.340 "adrfam": "ipv4", 00:26:06.340 "trsvcid": "4420", 00:26:06.340 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:06.340 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:06.340 "hdgst": false, 00:26:06.341 "ddgst": false 00:26:06.341 }, 00:26:06.341 "method": "bdev_nvme_attach_controller" 00:26:06.341 },{ 00:26:06.341 "params": { 00:26:06.341 "name": "Nvme10", 00:26:06.341 "trtype": "tcp", 00:26:06.341 "traddr": "10.0.0.2", 00:26:06.341 "adrfam": "ipv4", 00:26:06.341 "trsvcid": "4420", 00:26:06.341 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:06.341 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:06.341 "hdgst": false, 00:26:06.341 "ddgst": false 00:26:06.341 }, 00:26:06.341 "method": "bdev_nvme_attach_controller" 00:26:06.341 }' 00:26:06.341 [2024-11-26 18:21:54.178044] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:26:06.341 [2024-11-26 18:21:54.178122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657611 ] 00:26:06.341 [2024-11-26 18:21:54.249834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.341 [2024-11-26 18:21:54.309973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.240 Running I/O for 10 seconds... 00:26:08.240 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.240 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:08.240 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:08.240 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.240 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:26:08.530 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=149 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 149 -ge 100 ']' 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 657611 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 657611 ']' 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 657611 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 657611 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:08.788 18:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 657611' 00:26:08.788 killing process with pid 657611 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 657611 00:26:08.788 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 657611 00:26:08.788 Received shutdown signal, test time was about 0.947773 seconds 00:26:08.788 00:26:08.788 Latency(us) 00:26:08.788 [2024-11-26T17:21:56.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.788 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme1n1 : 0.94 271.04 16.94 0.00 0.00 233345.71 18155.90 254765.13 00:26:08.788 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme2n1 : 0.93 237.56 14.85 0.00 0.00 255543.62 15922.82 246997.90 00:26:08.788 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme3n1 : 0.95 270.35 16.90 0.00 0.00 224727.42 32428.18 240784.12 00:26:08.788 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme4n1 : 0.94 276.42 17.28 0.00 0.00 214541.62 4854.52 256318.58 00:26:08.788 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme5n1 : 0.92 209.77 13.11 0.00 0.00 277063.55 18738.44 260978.92 00:26:08.788 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme6n1 : 0.91 210.73 13.17 0.00 0.00 269418.70 23787.14 259425.47 00:26:08.788 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme7n1 : 0.92 208.03 13.00 0.00 0.00 267558.31 17961.72 264085.81 00:26:08.788 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme8n1 : 0.90 212.80 13.30 0.00 0.00 254524.68 21068.61 246997.90 00:26:08.788 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme9n1 : 0.94 209.05 13.07 0.00 0.00 253181.10 5704.06 260978.92 00:26:08.788 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:08.788 Verification LBA range: start 0x0 length 0x400 00:26:08.788 Nvme10n1 : 0.93 205.45 12.84 0.00 0.00 253474.64 20388.98 281173.71 00:26:08.788 [2024-11-26T17:21:56.799Z] =================================================================================================================== 00:26:08.788 [2024-11-26T17:21:56.799Z] Total : 2311.20 144.45 0.00 0.00 248008.51 4854.52 281173.71 00:26:09.047 18:21:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 657434 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:09.981 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:09.981 rmmod nvme_tcp 00:26:09.981 rmmod nvme_fabrics 00:26:09.981 rmmod nvme_keyring 00:26:10.238 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.238 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:26:10.238 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:26:10.238 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 657434 ']' 00:26:10.238 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 657434 00:26:10.238 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 657434 ']' 00:26:10.238 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 657434 00:26:10.238 18:21:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:26:10.238 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.238 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 657434 00:26:10.238 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:10.238 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:10.238 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 657434' 00:26:10.238 killing process with pid 657434 00:26:10.238 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 657434 00:26:10.238 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 657434 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.805 18:21:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.710 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:12.710 00:26:12.710 real 0m7.514s 00:26:12.710 user 0m22.701s 00:26:12.710 sys 0m1.497s 00:26:12.710 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.710 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:12.710 ************************************ 00:26:12.710 END TEST nvmf_shutdown_tc2 00:26:12.710 ************************************ 00:26:12.710 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:26:12.710 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:12.710 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.710 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:12.710 ************************************ 00:26:12.710 START TEST nvmf_shutdown_tc3 00:26:12.710 ************************************ 00:26:12.710 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:26:12.710 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.711 18:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:12.711 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.711 18:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:12.711 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:12.711 Found net devices under 0000:09:00.0: cvl_0_0 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.711 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.712 18:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:12.712 Found net devices under 0000:09:00.1: cvl_0_1 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.712 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.969 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:12.969 18:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.969 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.969 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.969 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:12.969 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:12.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:26:12.969 00:26:12.969 --- 10.0.0.2 ping statistics --- 00:26:12.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.969 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:26:12.970 00:26:12.970 --- 10.0.0.1 ping statistics --- 00:26:12.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.970 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=658516 00:26:12.970 18:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 658516 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 658516 ']' 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.970 18:22:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:12.970 [2024-11-26 18:22:00.866654] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:26:12.970 [2024-11-26 18:22:00.866745] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.970 [2024-11-26 18:22:00.942793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.227 [2024-11-26 18:22:01.006239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.227 [2024-11-26 18:22:01.006312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.227 [2024-11-26 18:22:01.006328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.227 [2024-11-26 18:22:01.006340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.227 [2024-11-26 18:22:01.006375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
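The target start-up captured above reduces to a simple pattern: run nvmf_tgt inside the test network namespace, then block until its RPC socket answers before sending any configuration RPCs. A minimal sketch of that pattern, assuming the namespace name, socket path and SPDK tree location shown in the logged command line (the polling loop is only a rough stand-in for what waitforlisten does):

# Illustrative sketch only -- not the autotest harness itself. Namespace, core mask
# and paths are taken from the command line logged above.
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/spdk.sock

# -m 0x1E pins reactors to cores 1-4, -e 0xFFFF enables every tracepoint group
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Retry the RPC socket until the target answers (or the process dies)
until "$SPDK/scripts/rpc.py" -s "$SOCK" framework_wait_init >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during start-up" >&2; exit 1; }
    sleep 0.5
done

The 0x1E core mask matches the four reactors reported on cores 1-4 a few lines further down.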
00:26:13.227 [2024-11-26 18:22:01.007893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.227 [2024-11-26 18:22:01.007954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:13.227 [2024-11-26 18:22:01.008019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:13.227 [2024-11-26 18:22:01.008021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.227 [2024-11-26 18:22:01.170763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.227 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.485 Malloc1 00:26:13.485 [2024-11-26 18:22:01.274190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.485 Malloc2 00:26:13.485 Malloc3 00:26:13.485 Malloc4 00:26:13.485 Malloc5 00:26:13.743 Malloc6 00:26:13.743 Malloc7 00:26:13.743 Malloc8 00:26:13.743 Malloc9 00:26:13.743 Malloc10 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=658696 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 658696 /var/tmp/bdevperf.sock 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 658696 ']' 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:13.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:13.743 { 00:26:13.743 "params": { 00:26:13.743 "name": "Nvme$subsystem", 00:26:13.743 "trtype": "$TEST_TRANSPORT", 00:26:13.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:13.743 "adrfam": "ipv4", 00:26:13.743 "trsvcid": "$NVMF_PORT", 00:26:13.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:13.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:13.743 "hdgst": ${hdgst:-false}, 00:26:13.743 "ddgst": ${ddgst:-false} 00:26:13.743 }, 00:26:13.743 "method": "bdev_nvme_attach_controller" 00:26:13.743 } 00:26:13.743 EOF 00:26:13.743 )") 00:26:13.743 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.001 { 00:26:14.001 "params": { 00:26:14.001 "name": "Nvme$subsystem", 00:26:14.001 "trtype": "$TEST_TRANSPORT", 00:26:14.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.001 "adrfam": "ipv4", 00:26:14.001 "trsvcid": "$NVMF_PORT", 00:26:14.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.001 "hdgst": ${hdgst:-false}, 00:26:14.001 "ddgst": ${ddgst:-false} 00:26:14.001 }, 00:26:14.001 "method": "bdev_nvme_attach_controller" 00:26:14.001 } 00:26:14.001 EOF 00:26:14.001 )") 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.001 { 00:26:14.001 "params": { 00:26:14.001 "name": 
"Nvme$subsystem", 00:26:14.001 "trtype": "$TEST_TRANSPORT", 00:26:14.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.001 "adrfam": "ipv4", 00:26:14.001 "trsvcid": "$NVMF_PORT", 00:26:14.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.001 "hdgst": ${hdgst:-false}, 00:26:14.001 "ddgst": ${ddgst:-false} 00:26:14.001 }, 00:26:14.001 "method": "bdev_nvme_attach_controller" 00:26:14.001 } 00:26:14.001 EOF 00:26:14.001 )") 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.001 { 00:26:14.001 "params": { 00:26:14.001 "name": "Nvme$subsystem", 00:26:14.001 "trtype": "$TEST_TRANSPORT", 00:26:14.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.001 "adrfam": "ipv4", 00:26:14.001 "trsvcid": "$NVMF_PORT", 00:26:14.001 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.001 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.001 "hdgst": ${hdgst:-false}, 00:26:14.001 "ddgst": ${ddgst:-false} 00:26:14.001 }, 00:26:14.001 "method": "bdev_nvme_attach_controller" 00:26:14.001 } 00:26:14.001 EOF 00:26:14.001 )") 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.001 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.001 { 00:26:14.001 "params": { 00:26:14.001 "name": "Nvme$subsystem", 00:26:14.001 "trtype": "$TEST_TRANSPORT", 00:26:14.001 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.001 "adrfam": "ipv4", 00:26:14.001 "trsvcid": "$NVMF_PORT", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.002 "hdgst": ${hdgst:-false}, 00:26:14.002 "ddgst": ${ddgst:-false} 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 } 00:26:14.002 EOF 00:26:14.002 )") 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.002 { 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme$subsystem", 00:26:14.002 "trtype": "$TEST_TRANSPORT", 00:26:14.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "$NVMF_PORT", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.002 "hdgst": ${hdgst:-false}, 00:26:14.002 "ddgst": ${ddgst:-false} 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 } 00:26:14.002 EOF 00:26:14.002 )") 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.002 { 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme$subsystem", 00:26:14.002 "trtype": "$TEST_TRANSPORT", 00:26:14.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "$NVMF_PORT", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.002 "hdgst": ${hdgst:-false}, 00:26:14.002 "ddgst": ${ddgst:-false} 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 } 00:26:14.002 EOF 00:26:14.002 )") 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.002 { 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme$subsystem", 00:26:14.002 "trtype": "$TEST_TRANSPORT", 00:26:14.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "$NVMF_PORT", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.002 "hdgst": ${hdgst:-false}, 00:26:14.002 "ddgst": ${ddgst:-false} 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 } 00:26:14.002 EOF 00:26:14.002 )") 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.002 { 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme$subsystem", 00:26:14.002 "trtype": "$TEST_TRANSPORT", 00:26:14.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "$NVMF_PORT", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.002 "hdgst": ${hdgst:-false}, 00:26:14.002 "ddgst": ${ddgst:-false} 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 } 00:26:14.002 EOF 00:26:14.002 )") 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:14.002 { 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme$subsystem", 00:26:14.002 "trtype": "$TEST_TRANSPORT", 00:26:14.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "$NVMF_PORT", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:14.002 "hdgst": ${hdgst:-false}, 00:26:14.002 "ddgst": ${ddgst:-false} 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 } 00:26:14.002 EOF 00:26:14.002 )") 00:26:14.002 18:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:26:14.002 18:22:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme1", 00:26:14.002 "trtype": "tcp", 00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:14.002 "hdgst": false, 00:26:14.002 "ddgst": false 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 },{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme2", 00:26:14.002 "trtype": "tcp", 00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:14.002 "hdgst": false, 00:26:14.002 "ddgst": false 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 },{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme3", 00:26:14.002 "trtype": "tcp", 00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:14.002 "hdgst": false, 00:26:14.002 "ddgst": false 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 },{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme4", 00:26:14.002 "trtype": "tcp", 00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:14.002 "hdgst": false, 00:26:14.002 "ddgst": false 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 },{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme5", 00:26:14.002 "trtype": "tcp", 00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:14.002 "hdgst": false, 00:26:14.002 "ddgst": false 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 },{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme6", 00:26:14.002 "trtype": "tcp", 00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:14.002 "hdgst": false, 00:26:14.002 "ddgst": false 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 },{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme7", 00:26:14.002 "trtype": "tcp", 00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:14.002 "hdgst": false, 00:26:14.002 "ddgst": false 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 },{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme8", 00:26:14.002 "trtype": "tcp", 
00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:14.002 "hdgst": false, 00:26:14.002 "ddgst": false 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 },{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme9", 00:26:14.002 "trtype": "tcp", 00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:14.002 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:14.002 "hdgst": false, 00:26:14.002 "ddgst": false 00:26:14.002 }, 00:26:14.002 "method": "bdev_nvme_attach_controller" 00:26:14.002 },{ 00:26:14.002 "params": { 00:26:14.002 "name": "Nvme10", 00:26:14.002 "trtype": "tcp", 00:26:14.002 "traddr": "10.0.0.2", 00:26:14.002 "adrfam": "ipv4", 00:26:14.002 "trsvcid": "4420", 00:26:14.002 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:14.003 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:14.003 "hdgst": false, 00:26:14.003 "ddgst": false 00:26:14.003 }, 00:26:14.003 "method": "bdev_nvme_attach_controller" 00:26:14.003 }' 00:26:14.003 [2024-11-26 18:22:01.795678] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:26:14.003 [2024-11-26 18:22:01.795771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid658696 ] 00:26:14.003 [2024-11-26 18:22:01.868152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.003 [2024-11-26 18:22:01.928792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.900 Running I/O for 10 seconds... 
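The JSON streamed into bdevperf above is generated one entry per subsystem; each entry attaches one NVMe-oF controller (Nvme1..Nvme10) over TCP to 10.0.0.2:4420 with its own subsystem and host NQN. A minimal sketch of that generation step, assuming bdevperf's --json input uses the standard "subsystems"/"bdev" wrapper (only the per-controller entries appear verbatim in the trace):

# Illustrative sketch, not the gen_nvmf_target_json helper itself. The outer
# "subsystems"/"bdev" wrapper is an assumption about the --json format; the
# per-controller parameters mirror the entries printed in the trace.
gen_bdevperf_json() {
    local i entries=()
    for i in "$@"; do
        entries+=("{ \"method\": \"bdev_nvme_attach_controller\", \"params\": {
          \"name\": \"Nvme$i\", \"trtype\": \"tcp\", \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\",
          \"trsvcid\": \"4420\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$i\",
          \"hostnqn\": \"nqn.2016-06.io.spdk:host$i\", \"hdgst\": false, \"ddgst\": false } }")
    done
    local IFS=,
    printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${entries[*]}"
}

# 64-deep, 64 KiB verify workload for 10 seconds against all ten controllers
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_bdevperf_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10

Each controller carries its own hostnqn, so the ten connections stay distinguishable on the target side while the shutdown test tears the target down underneath them.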
00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:26:15.900 18:22:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=73 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 73 -ge 100 ']' 00:26:16.158 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:26:16.416 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:26:16.416 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:26:16.416 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:26:16.416 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:26:16.416 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.416 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:16.416 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=142 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 142 -ge 100 ']' 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 658516 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 658516 ']' 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 658516 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 658516 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:16.690 18:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 658516' 00:26:16.690 killing process with pid 658516 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 658516 00:26:16.690 18:22:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 658516 00:26:16.690 [2024-11-26 18:22:04.481463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.481770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1156ce0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.482769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.690 [2024-11-26 18:22:04.482812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.690 [2024-11-26 18:22:04.482830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.690 [2024-11-26 18:22:04.482844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.690 [2024-11-26 18:22:04.482858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.690 [2024-11-26 18:22:04.482872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.690 [2024-11-26 18:22:04.482885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.690 [2024-11-26 18:22:04.482898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.690 [2024-11-26 18:22:04.482910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bc700 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.482932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.482965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.482980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.482992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 
18:22:04.483196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same 
with the state(6) to be set 00:26:16.690 [2024-11-26 18:22:04.483491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.483751] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a02c0 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the 
state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.488990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.489171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157680 is same with the state(6) to be set 00:26:16.691 [2024-11-26 18:22:04.490449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 
18:22:04.490480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same 
with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.490993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491029] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.491244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1157b70 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the 
state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.692 [2024-11-26 18:22:04.492383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492468] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 
18:22:04.492781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492932] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.492968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158040 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same 
with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494528] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.693 [2024-11-26 18:22:04.494705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the 
state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.494980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.495008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.495020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.495032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1158510 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.496935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.496962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.496977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.496989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 
18:22:04.497357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.694 [2024-11-26 18:22:04.497552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same 
with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.497806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c8b60 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498685] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the 
state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.498993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.499409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x139fdd0 is same with the state(6) to be set 00:26:16.695 [2024-11-26 18:22:04.502874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.695 [2024-11-26 18:22:04.502921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.695 [2024-11-26 18:22:04.502954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.695 [2024-11-26 18:22:04.502970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.502985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.502998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251a100 is same with the state(6) to be set 00:26:16.696 [2024-11-26 18:22:04.503082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x20bc700 (9): Bad file descriptor 00:26:16.696 [2024-11-26 18:22:04.503140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e7310 is same with the state(6) to be set 00:26:16.696 [2024-11-26 18:22:04.503326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2529660 is same with the state(6) to be set 00:26:16.696 [2024-11-26 18:22:04.503518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503554] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2516f00 is same with the state(6) to be set 00:26:16.696 [2024-11-26 18:22:04.503694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bb800 is same with the state(6) to be set 00:26:16.696 [2024-11-26 18:22:04.503855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.503963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.503976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024110 is same with the state(6) to be set 00:26:16.696 [2024-11-26 18:22:04.504024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b04f0 is same with the state(6) to be set 00:26:16.696 [2024-11-26 18:22:04.504191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504296] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b06f0 is same with the state(6) to be set 00:26:16.696 [2024-11-26 18:22:04.504373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:16.696 [2024-11-26 18:22:04.504483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.696 [2024-11-26 18:22:04.504496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bc270 is same with the state(6) to be set 00:26:16.696 [2024-11-26 18:22:04.504593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.504982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.504996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:16.697 [2024-11-26 18:22:04.505704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.697 [2024-11-26 18:22:04.505799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.697 [2024-11-26 18:22:04.505813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.505829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.505843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.505858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.505873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.505888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.505902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.505918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.505932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.505948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.505962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.505978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.505992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 
18:22:04.506008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506336] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.506584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.506598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.698 [2024-11-26 18:22:04.507465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.698 [2024-11-26 18:22:04.507480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.507979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.507994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.508008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.508023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.508036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.508054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.508068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.508084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.508099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.508120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.699 [2024-11-26 18:22:04.521784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.699 [2024-11-26 18:22:04.521800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.521817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.521833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.521848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.521864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.521879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.521909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.521924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.521940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.521954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.521970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.521984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.522442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.522511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:16.700 [2024-11-26 18:22:04.523088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.700 [2024-11-26 18:22:04.523616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.700 [2024-11-26 18:22:04.523632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.523978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.523993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.701 [2024-11-26 18:22:04.524632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.701 [2024-11-26 18:22:04.524648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.524971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.524988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.525002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.525017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.525031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.525047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.525061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.525077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.525091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.525551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251a100 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.525591] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:26:16.702 [2024-11-26 18:22:04.525616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e7310 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.525641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2529660 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.525665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2516f00 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.525689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bb800 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.525720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2024110 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.525751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b04f0 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.525781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b06f0 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.525808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bc270 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.529682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:16.702 [2024-11-26 18:22:04.529722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:16.702 [2024-11-26 18:22:04.530282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:26:16.702 [2024-11-26 18:22:04.530457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.702 [2024-11-26 18:22:04.530489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bc700 with addr=10.0.0.2, port=4420 00:26:16.702 [2024-11-26 18:22:04.530507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bc700 is same with the state(6) to be set 00:26:16.702 [2024-11-26 18:22:04.530591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.702 [2024-11-26 18:22:04.530616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b06f0 with addr=10.0.0.2, port=4420 00:26:16.702 [2024-11-26 18:22:04.530632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b06f0 is same with the state(6) to be set 00:26:16.702 [2024-11-26 18:22:04.531777] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:16.702 [2024-11-26 18:22:04.531856] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:16.702 [2024-11-26 18:22:04.531926] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:16.702 [2024-11-26 18:22:04.532023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.702 [2024-11-26 18:22:04.532050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e7310 with addr=10.0.0.2, port=4420 00:26:16.702 [2024-11-26 18:22:04.532067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e7310 is same with the state(6) to be set 00:26:16.702 [2024-11-26 18:22:04.532089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x20bc700 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.532111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b06f0 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.532203] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:16.702 [2024-11-26 18:22:04.532274] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:16.702 [2024-11-26 18:22:04.532355] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:16.702 [2024-11-26 18:22:04.532426] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:26:16.702 [2024-11-26 18:22:04.532517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e7310 (9): Bad file descriptor 00:26:16.702 [2024-11-26 18:22:04.532542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:16.702 [2024-11-26 18:22:04.532556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:16.702 [2024-11-26 18:22:04.532572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:16.702 [2024-11-26 18:22:04.532588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:16.702 [2024-11-26 18:22:04.532605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:16.702 [2024-11-26 18:22:04.532617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:16.702 [2024-11-26 18:22:04.532630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:16.702 [2024-11-26 18:22:04.532642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:26:16.702 [2024-11-26 18:22:04.532760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:16.702 [2024-11-26 18:22:04.532781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:16.702 [2024-11-26 18:22:04.532794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:16.702 [2024-11-26 18:22:04.532808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:26:16.702 [2024-11-26 18:22:04.535675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.535705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.535735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.535751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.535769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.535783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.535799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.702 [2024-11-26 18:22:04.535812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.702 [2024-11-26 18:22:04.535828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.535842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.535858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.535871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.535887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.535902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.535918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.535933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.535948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.535962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.535978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.535992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 
18:22:04.536007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536321] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.703 [2024-11-26 18:22:04.536925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.703 [2024-11-26 18:22:04.536939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.536955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.536968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.536983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.536997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.537647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.537662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e58b0 is same with the state(6) to be set 00:26:16.704 [2024-11-26 18:22:04.538959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.538983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.539003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.539019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.539035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.539049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.539064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.539079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.539094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.539108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.539123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.539137] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.539154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.539168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.539184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.539198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.539214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.704 [2024-11-26 18:22:04.539227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.704 [2024-11-26 18:22:04.539243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.539970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.539984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.540000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.540013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.540029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.540042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.540062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.540077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.540093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.540107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.540122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.540136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.540152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.540166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.540181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.705 [2024-11-26 18:22:04.540196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.705 [2024-11-26 18:22:04.540211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:16.706 [2024-11-26 18:22:04.540379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 
18:22:04.540682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.540906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.540921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e6cd0 is same with the state(6) to be set 00:26:16.706 [2024-11-26 18:22:04.542165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542224] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.706 [2024-11-26 18:22:04.542566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.706 [2024-11-26 18:22:04.542581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.542967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.542985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.707 [2024-11-26 18:22:04.543646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.707 [2024-11-26 18:22:04.543667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.543978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.543997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.544011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.544027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.544041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.544057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.544071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.544086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.544100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.544116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.544130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.544145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e7c60 is same with the state(6) to be set 00:26:16.708 [2024-11-26 18:22:04.545399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.545423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.545444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.545460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.545476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.545490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.545506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.545520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.545537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.545551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.545567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.545581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.545597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.708 [2024-11-26 18:22:04.545611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.708 [2024-11-26 18:22:04.545632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
00:26:16.708 - 00:26:16.710 [2024-11-26 18:22:04.545647 - 18:22:04.547348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7-63 nsid:1 lba:25472-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:16.710 [2024-11-26 18:22:04.547362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e91f0 is same with the state(6) to be set 
00:26:16.710 - 00:26:16.712 [2024-11-26 18:22:04.548616 - 18:22:04.558246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:16.712 [2024-11-26 18:22:04.558262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x3407180 is same with the state(6) to be set 
00:26:16.712 - 00:26:16.714 [2024-11-26 18:22:04.559631 - 18:22:04.561598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:16.714 [2024-11-26 18:22:04.561612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c09a0 is same with the state(6) to be set 
00:26:16.714 [2024-11-26 18:22:04.562858 - 18:22:04.563598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-23 nsid:1 lba:16384-19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:16.714 [2024-11-26 18:22:04.563614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.714 [2024-11-26 18:22:04.563628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.714 [2024-11-26 18:22:04.563643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.714 [2024-11-26 18:22:04.563657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.714 [2024-11-26 18:22:04.563673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.714 [2024-11-26 18:22:04.563690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.714 [2024-11-26 18:22:04.563707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.714 [2024-11-26 18:22:04.563721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.714 [2024-11-26 18:22:04.563737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.563750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.563766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.563780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.563796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.563809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.563824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.563838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.563854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.563868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.563883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.563897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.563913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.563927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.563943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.563958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.563973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.563987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.715 [2024-11-26 18:22:04.564774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.715 [2024-11-26 18:22:04.564790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.716 [2024-11-26 18:22:04.564804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.716 [2024-11-26 18:22:04.564818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c1e80 is same with the state(6) to be set 00:26:16.716 [2024-11-26 18:22:04.566466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:16.716 [2024-11-26 18:22:04.566501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:26:16.716 [2024-11-26 18:22:04.566522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:26:16.716 [2024-11-26 18:22:04.566541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:26:16.716 [2024-11-26 18:22:04.566666] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:26:16.716 [2024-11-26 18:22:04.566694] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:26:16.716 [2024-11-26 18:22:04.566716] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:26:16.716 [2024-11-26 18:22:04.582473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:16.716 [2024-11-26 18:22:04.582558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:16.716 task offset: 31488 on job bdev=Nvme1n1 fails
00:26:16.716
00:26:16.716 Latency(us)
00:26:16.716 [2024-11-26T17:22:04.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:16.716 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme1n1 ended in about 0.97 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme1n1 : 0.97 198.82 12.43 66.27 0.00 238856.91 19612.25 248551.35
00:26:16.716 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme2n1 ended in about 0.97 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme2n1 : 0.97 198.60 12.41 66.20 0.00 234548.72 21942.42 256318.58
00:26:16.716 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme3n1 ended in about 0.98 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme3n1 : 0.98 201.53 12.60 65.47 0.00 228180.78 17282.09 254765.13
00:26:16.716 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme4n1 ended in about 0.98 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme4n1 : 0.98 195.77 12.24 65.26 0.00 228899.84 19806.44 250104.79
00:26:16.716 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme5n1 ended in about 0.98 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme5n1 : 0.98 130.09 8.13 65.04 0.00 300339.71 22427.88 264085.81
00:26:16.716 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme6n1 ended in about 0.99 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme6n1 : 0.99 194.50 12.16 64.83 0.00 221374.58 18835.53 248551.35
00:26:16.716 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme7n1 ended in about 0.97 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme7n1 : 0.97 198.31 12.39 66.10 0.00 212094.10 19126.80 254765.13
00:26:16.716 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme8n1 ended in about 1.00 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme8n1 : 1.00 128.25 8.02 64.12 0.00 286900.53 18155.90 292047.83
00:26:16.716 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme9n1 ended in about 1.00 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme9n1 : 1.00 127.83 7.99 63.91 0.00 282221.48 25049.32 288940.94
00:26:16.716 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:16.716 Job: Nvme10n1 ended in about 1.00 seconds with error
00:26:16.716 Verification LBA range: start 0x0 length 0x400
00:26:16.716 Nvme10n1 : 1.00 127.42 7.96 63.71 0.00 277565.69 20971.52 259425.47
00:26:16.716 [2024-11-26T17:22:04.727Z] ===================================================================================================================
00:26:16.716 [2024-11-26T17:22:04.727Z] Total : 1701.11 106.32 650.93 0.00 247095.12 17282.09 292047.83
00:26:16.716 [2024-11-26 18:22:04.612981] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:16.716 [2024-11-26 18:22:04.613071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:16.716 [2024-11-26 18:22:04.613402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.716 [2024-11-26 18:22:04.613438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bc270 with addr=10.0.0.2, port=4420
00:26:16.716 [2024-11-26 18:22:04.613459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bc270 is same with the state(6) to be set
00:26:16.716 [2024-11-26 18:22:04.613544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.716 [2024-11-26 18:22:04.613570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bb800 with addr=10.0.0.2, port=4420
00:26:16.716 [2024-11-26 18:22:04.613586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bb800 is same with the state(6) to be set
00:26:16.716 [2024-11-26 18:22:04.613671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.716 [2024-11-26 18:22:04.613698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b04f0 with addr=10.0.0.2, port=4420
00:26:16.716 [2024-11-26 18:22:04.613715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b04f0 is same with the state(6) to be set
00:26:16.716 [2024-11-26 18:22:04.613799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.716 [2024-11-26 18:22:04.613827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2024110 with addr=10.0.0.2, port=4420
00:26:16.716 [2024-11-26 18:22:04.613843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024110 is same with the state(6) to be set
00:26:16.716 [2024-11-26 18:22:04.613890] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:26:16.716 [2024-11-26 18:22:04.613916] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:26:16.716 [2024-11-26 18:22:04.613939] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:26:16.716 [2024-11-26 18:22:04.613970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2024110 (9): Bad file descriptor 00:26:16.716 [2024-11-26 18:22:04.613999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b04f0 (9): Bad file descriptor 00:26:16.716 [2024-11-26 18:22:04.614024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bb800 (9): Bad file descriptor 00:26:16.716 [2024-11-26 18:22:04.614047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bc270 (9): Bad file descriptor 00:26:16.716 1701.11 IOPS, 106.32 MiB/s [2024-11-26T17:22:04.727Z] [2024-11-26 18:22:04.616051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:26:16.716 [2024-11-26 18:22:04.616079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:16.716 [2024-11-26 18:22:04.616249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-11-26 18:22:04.616277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2529660 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-11-26 18:22:04.616328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2529660 is same with the state(6) to be set 00:26:16.716 [2024-11-26 18:22:04.616425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-11-26 18:22:04.616451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2516f00 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-11-26 18:22:04.616467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2516f00 is same with the state(6) to be set 00:26:16.716 [2024-11-26 18:22:04.616545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-11-26 18:22:04.616571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x251a100 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-11-26 18:22:04.616587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251a100 is same with the state(6) to be set 00:26:16.716 [2024-11-26 18:22:04.616643] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:26:16.716 [2024-11-26 18:22:04.616668] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:26:16.716 [2024-11-26 18:22:04.616688] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:26:16.716 [2024-11-26 18:22:04.616709] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:26:16.716 [2024-11-26 18:22:04.616729] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:26:16.716 [2024-11-26 18:22:04.616811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:26:16.716 [2024-11-26 18:22:04.616929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-11-26 18:22:04.616957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20b06f0 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-11-26 18:22:04.616973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b06f0 is same with the state(6) to be set 00:26:16.716 [2024-11-26 18:22:04.617060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.716 [2024-11-26 18:22:04.617086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20bc700 with addr=10.0.0.2, port=4420 00:26:16.716 [2024-11-26 18:22:04.617103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bc700 is same with the state(6) to be set 00:26:16.717 [2024-11-26 18:22:04.617122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2529660 (9): Bad file descriptor 00:26:16.717 [2024-11-26 18:22:04.617142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2516f00 (9): Bad file descriptor 00:26:16.717 [2024-11-26 18:22:04.617160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x251a100 (9): Bad file descriptor 00:26:16.717 [2024-11-26 18:22:04.617176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.617189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.617204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:26:16.717 [2024-11-26 18:22:04.617220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:26:16.717 [2024-11-26 18:22:04.617237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.617250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.617269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:26:16.717 [2024-11-26 18:22:04.617282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:26:16.717 [2024-11-26 18:22:04.617296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.617319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.617334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 
00:26:16.717 [2024-11-26 18:22:04.617347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:26:16.717 [2024-11-26 18:22:04.617361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.617374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.617387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:26:16.717 [2024-11-26 18:22:04.617400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:26:16.717 [2024-11-26 18:22:04.617578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.717 [2024-11-26 18:22:04.617605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e7310 with addr=10.0.0.2, port=4420 00:26:16.717 [2024-11-26 18:22:04.617621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e7310 is same with the state(6) to be set 00:26:16.717 [2024-11-26 18:22:04.617640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b06f0 (9): Bad file descriptor 00:26:16.717 [2024-11-26 18:22:04.617660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20bc700 (9): Bad file descriptor 00:26:16.717 [2024-11-26 18:22:04.617676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.617689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.617703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:26:16.717 [2024-11-26 18:22:04.617716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:26:16.717 [2024-11-26 18:22:04.617731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.617744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.617757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:26:16.717 [2024-11-26 18:22:04.617769] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:26:16.717 [2024-11-26 18:22:04.617782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.617795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.617808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:26:16.717 [2024-11-26 18:22:04.617820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:26:16.717 [2024-11-26 18:22:04.617861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e7310 (9): Bad file descriptor 00:26:16.717 [2024-11-26 18:22:04.617905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.617920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.617934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:26:16.717 [2024-11-26 18:22:04.617947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:26:16.717 [2024-11-26 18:22:04.617962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.617975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.617988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:16.717 [2024-11-26 18:22:04.618000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:16.717 [2024-11-26 18:22:04.618041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:26:16.717 [2024-11-26 18:22:04.618060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:26:16.717 [2024-11-26 18:22:04.618074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:26:16.717 [2024-11-26 18:22:04.618086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:26:17.284 18:22:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:26:18.224 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 658696 00:26:18.224 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:26:18.224 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 658696 00:26:18.224 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:26:18.224 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 658696 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:18.225 rmmod nvme_tcp 00:26:18.225 
rmmod nvme_fabrics 00:26:18.225 rmmod nvme_keyring 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 658516 ']' 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 658516 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 658516 ']' 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 658516 00:26:18.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (658516) - No such process 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 658516 is not found' 00:26:18.225 Process with pid 658516 is not found 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.225 18:22:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:20.765 00:26:20.765 real 0m7.564s 00:26:20.765 user 0m18.679s 00:26:20.765 sys 0m1.520s 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:26:20.765 ************************************ 00:26:20.765 END TEST nvmf_shutdown_tc3 00:26:20.765 ************************************ 00:26:20.765 18:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:20.765 ************************************ 00:26:20.765 START TEST nvmf_shutdown_tc4 00:26:20.765 ************************************ 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.765 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:20.766 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:20.766 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.766 18:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:20.766 Found net devices under 0000:09:00.0: cvl_0_0 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:20.766 Found net devices under 0000:09:00.1: cvl_0_1 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.766 18:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:20.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:26:20.766 00:26:20.766 --- 10.0.0.2 ping statistics --- 00:26:20.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.766 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:26:20.766 00:26:20.766 --- 10.0.0.1 ping statistics --- 00:26:20.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.766 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=659515 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 659515 00:26:20.766 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 659515 ']' 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
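For orientation, the block of nvmf/common.sh trace above (lines @250-@291) is the point where the test builds its back-to-back NVMe/TCP network out of the two E810 ports before the target is started. The following is a condensed, illustrative sketch of those steps reconstructed from this run's xtrace output; the cvl_0_0/cvl_0_1 device names and the 10.0.0.x addresses are the values discovered on this particular node, not fixed constants of the script.

# Sketch of the nvmf_tcp_init steps traced above (assumed variable names; commands mirror the trace).
TARGET_IF=cvl_0_0            # port moved into a namespace and used by the NVMe-oF target
INITIATOR_IF=cvl_0_1         # port left in the root namespace for the initiator
NS=cvl_0_0_ns_spdk           # namespace isolating the target side

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                       # target port disappears from the root namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target address
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface, then verify connectivity both ways.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

The successful 10.0.0.2 and 10.0.0.1 pings in the log confirm this topology is up; everything that follows (nvmfappstart, transport creation, the perf run) runs against it.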
00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:20.767 [2024-11-26 18:22:08.474464] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:26:20.767 [2024-11-26 18:22:08.474554] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.767 [2024-11-26 18:22:08.552796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.767 [2024-11-26 18:22:08.610594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.767 [2024-11-26 18:22:08.610644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.767 [2024-11-26 18:22:08.610666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.767 [2024-11-26 18:22:08.610677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.767 [2024-11-26 18:22:08.610687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.767 [2024-11-26 18:22:08.612174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.767 [2024-11-26 18:22:08.612235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.767 [2024-11-26 18:22:08.612299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:20.767 [2024-11-26 18:22:08.612311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.767 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:20.767 [2024-11-26 18:22:08.770379] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.025 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.025 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:26:21.025 18:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:26:21.025 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:21.025 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:21.025 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:21.025 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.025 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.025 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.025 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.026 18:22:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:21.026 Malloc1 
00:26:21.026 [2024-11-26 18:22:08.871035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.026 Malloc2 00:26:21.026 Malloc3 00:26:21.026 Malloc4 00:26:21.290 Malloc5 00:26:21.291 Malloc6 00:26:21.291 Malloc7 00:26:21.291 Malloc8 00:26:21.291 Malloc9 00:26:21.550 Malloc10 00:26:21.550 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.550 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:26:21.551 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:21.551 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:21.551 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=659673 00:26:21.551 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:26:21.551 18:22:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:26:21.551 [2024-11-26 18:22:09.415519] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 659515 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 659515 ']' 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 659515 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659515 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659515' 00:26:26.865 killing process with pid 659515 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 659515 00:26:26.865 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 659515 00:26:26.865 Write completed with error (sct=0, sc=8) 
00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 [2024-11-26 18:22:14.405152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 
00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 [2024-11-26 18:22:14.405735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a35d0 is same with the state(6) to be set 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 [2024-11-26 18:22:14.405792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a35d0 is same with Write completed with error (sct=0, sc=8) 00:26:26.865 the state(6) to be set 00:26:26.865 [2024-11-26 18:22:14.405810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a35d0 is same with the state(6) to be set 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 [2024-11-26 18:22:14.405824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a35d0 is same with the state(6) to be set 00:26:26.865 starting I/O failed: -6 00:26:26.865 [2024-11-26 18:22:14.405838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a35d0 is same with the state(6) to be set 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 [2024-11-26 18:22:14.405851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a35d0 is same with the state(6) to be set 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 starting I/O failed: -6 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.865 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.406384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 
00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, 
sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.407584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.866 [2024-11-26 18:22:14.407703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5c50 is same with the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.407731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5c50 is same with the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 [2024-11-26 18:22:14.407745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5c50 is same with the state(6) to be set 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.407758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5c50 is same with the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.407786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5c50 is same with the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 [2024-11-26 18:22:14.407801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5c50 is same with the state(6) to be set 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.407813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5c50 is same with the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.407825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a5c50 is same with the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed 
with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.408376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6120 is same with Write completed with error (sct=0, sc=8) 00:26:26.866 the state(6) to be set 00:26:26.866 starting I/O failed: -6 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 [2024-11-26 18:22:14.408412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6120 is same with the state(6) to be set 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.408427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6120 is same with the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.408439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6120 is same with the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 [2024-11-26 18:22:14.408452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6120 is same with the state(6) to be set 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.408465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6120 is same with the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 [2024-11-26 18:22:14.408478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6120 is same with starting I/O failed: -6 00:26:26.866 the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 [2024-11-26 18:22:14.408491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6120 is same with starting I/O failed: -6 00:26:26.866 the state(6) to be set 00:26:26.866 Write completed with error (sct=0, sc=8) 00:26:26.866 [2024-11-26 18:22:14.408520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a6120 is same with the state(6) to be set 00:26:26.866 starting I/O failed: -6 00:26:26.866 [2024-11-26 18:22:14.408533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x21a6120 is same with the state(6) to be set 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 [2024-11-26 18:22:14.408943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a65f0 is same with the state(6) to be set 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 [2024-11-26 18:22:14.408976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a65f0 is same with the state(6) to be set 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 [2024-11-26 18:22:14.408992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a65f0 is same with the state(6) to be set 00:26:26.867 starting I/O failed: -6 00:26:26.867 [2024-11-26 18:22:14.409005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a65f0 is same with the state(6) to be set 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 [2024-11-26 18:22:14.409017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a65f0 is same with the state(6) to be set 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 [2024-11-26 18:22:14.409594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:26.867 NVMe io qpair process completion error 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write 
completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 [2024-11-26 18:22:14.414188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 
00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 [2024-11-26 18:22:14.415165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:26.867 [2024-11-26 18:22:14.415227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with the state(6) to be set 00:26:26.867 [2024-11-26 18:22:14.415261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with the state(6) to be set 00:26:26.867 [2024-11-26 18:22:14.415275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with the state(6) to be set 00:26:26.867 [2024-11-26 18:22:14.415298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with the state(6) to be set 00:26:26.867 [2024-11-26 18:22:14.415321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with the state(6) to be set 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 [2024-11-26 18:22:14.415334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with the state(6) to be set 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 [2024-11-26 18:22:14.415347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with starting I/O failed: -6 00:26:26.867 the state(6) to be set 00:26:26.867 [2024-11-26 18:22:14.415371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with Write completed with error (sct=0, sc=8) 00:26:26.867 the state(6) to be set 00:26:26.867 starting I/O failed: -6 00:26:26.867 [2024-11-26 18:22:14.415385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with the state(6) to be set 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 [2024-11-26 18:22:14.415398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9ba0 is same with the state(6) to be set 00:26:26.867 starting I/O 
failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.867 starting I/O failed: -6 00:26:26.867 Write completed with error (sct=0, sc=8) 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 [2024-11-26 18:22:14.415766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with Write completed with error (sct=0, sc=8) 00:26:26.868 the state(6) to be set 00:26:26.868 starting I/O failed: -6 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 [2024-11-26 18:22:14.415799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with the state(6) to be set 00:26:26.868 [2024-11-26 18:22:14.415815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with the state(6) to be set 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 starting I/O failed: -6 00:26:26.868 [2024-11-26 18:22:14.415827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with the state(6) to be set 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 [2024-11-26 18:22:14.415840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with the state(6) to be set 00:26:26.868 starting I/O failed: -6 00:26:26.868 [2024-11-26 18:22:14.415852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with the state(6) to be set 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 [2024-11-26 18:22:14.415864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with the state(6) to be set 00:26:26.868 starting I/O failed: -6 00:26:26.868 [2024-11-26 18:22:14.415876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with the state(6) to be set 00:26:26.868 Write completed with error (sct=0, sc=8) 00:26:26.868 [2024-11-26 18:22:14.415888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with the state(6) to be set 00:26:26.868 Write completed with error (sct=0, 
sc=8) 00:26:26.868 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.868 [2024-11-26 18:22:14.415901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe9f20 is same with the state(6) to be set
[... same recv-state error for tqpair=0x1fe9f20 repeated at 18:22:14.415913 and 18:22:14.415926 ...]
00:26:26.868 [2024-11-26 18:22:14.416243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fea3f0 is same with the state(6) to be set
[... same recv-state error for tqpair=0x1fea3f0 repeated through 18:22:14.416399 ...]
00:26:26.868 [2024-11-26 18:22:14.416316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.868 [2024-11-26 18:22:14.416787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225bb20 is same with the state(6) to be set
[... same recv-state error for tqpair=0x225bb20 repeated through 18:22:14.416950 ...]
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.869 [2024-11-26 18:22:14.417881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:26.869 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.869 [2024-11-26 18:22:14.422501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22438f0 is same with the state(6) to be set
[... same recv-state error for tqpair=0x22438f0 repeated through 18:22:14.422566 ...]
00:26:26.869 [2024-11-26 18:22:14.422582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:26.869 [2024-11-26 18:22:14.422626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2243de0 is same with the state(6) to be set
[... same recv-state error for tqpair=0x2243de0 repeated through 18:22:14.422760 ...]
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.869 [2024-11-26 18:22:14.423126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22442d0 is same with the state(6) to be set
[... same recv-state error for tqpair=0x22442d0 repeated through 18:22:14.423263 ...]
00:26:26.870 [2024-11-26 18:22:14.423707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:26.870 [2024-11-26 18:22:14.423792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2243420 is same with the state(6) to be set
[... same recv-state error for tqpair=0x2243420 repeated through 18:22:14.423931 ...]
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.870 [2024-11-26 18:22:14.424918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.871 [2024-11-26 18:22:14.426591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:26.871 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.871 [2024-11-26 18:22:14.427869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.871 [2024-11-26 18:22:14.428991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.872 [2024-11-26 18:22:14.430147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.872 [2024-11-26 18:22:14.432161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:26.872 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.873 [2024-11-26 18:22:14.433385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.873 [2024-11-26 18:22:14.434387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.873 [2024-11-26 18:22:14.435769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.874 [2024-11-26 18:22:14.438558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:26.874 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" entries ...]
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.874 [2024-11-26 18:22:14.439909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.874 [2024-11-26 18:22:14.441053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.875 [2024-11-26 18:22:14.442209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries ...]
00:26:26.875 Write completed with
error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error 
(sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 [2024-11-26 18:22:14.444803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.875 NVMe io qpair process completion error 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 Write completed with error (sct=0, sc=8) 00:26:26.875 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 
00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 [2024-11-26 18:22:14.446146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 [2024-11-26 18:22:14.447105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 
Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 
00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 [2024-11-26 18:22:14.448361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.876 Write completed with error (sct=0, sc=8) 00:26:26.876 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write 
completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 [2024-11-26 18:22:14.451852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:26.877 NVMe io qpair process completion error 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed 
with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 [2024-11-26 18:22:14.453232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 
Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 [2024-11-26 18:22:14.454349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.877 starting I/O failed: -6 00:26:26.877 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O 
failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 [2024-11-26 18:22:14.455493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 
Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write 
completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 [2024-11-26 18:22:14.457834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:26.878 NVMe io qpair process completion error 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.878 starting I/O failed: -6 00:26:26.878 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 [2024-11-26 18:22:14.458954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with 
error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 [2024-11-26 18:22:14.459993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 
starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 Write completed with error (sct=0, sc=8) 00:26:26.879 starting I/O failed: -6 00:26:26.879 [2024-11-26 18:22:14.461162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.879 Write completed with error (sct=0, sc=8) 
00:26:26.879 starting I/O failed: -6
00:26:26.879 Write completed with error (sct=0, sc=8)
[... the two entries above repeat for every write still outstanding on the failing qpairs ...]
00:26:26.880 [2024-11-26 18:22:14.463300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:26.880 NVMe io qpair process completion error
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:26.881 [2024-11-26 18:22:14.465328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:26.881 [2024-11-26 18:22:14.466546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries omitted ...]
00:26:26.881 [2024-11-26 18:22:14.470026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:26.881 NVMe io qpair process completion error
00:26:26.881 Initializing NVMe Controllers
00:26:26.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:26:26.881 Controller IO queue size 128, less than required.
00:26:26.881 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:26:26.881 Controller IO queue size 128, less than required.
00:26:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:26:26.882 Controller IO queue size 128, less than required.
00:26:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:26:26.882 Controller IO queue size 128, less than required.
00:26:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:26:26.882 Controller IO queue size 128, less than required.
00:26:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:26:26.882 Controller IO queue size 128, less than required.
00:26:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:26:26.882 Controller IO queue size 128, less than required.
00:26:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:26:26.882 Controller IO queue size 128, less than required.
00:26:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:26:26.882 Controller IO queue size 128, less than required.
00:26:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:26.882 Controller IO queue size 128, less than required.
00:26:26.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:26.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:26.882 Initialization complete. Launching workers.
00:26:26.882 ========================================================
00:26:26.882 Latency(us)
00:26:26.882 Device Information : IOPS MiB/s Average min max
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1817.03 78.08 70463.58 736.61 121298.38
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1800.65 77.37 70347.79 970.49 120694.98
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1775.53 76.29 72071.69 1225.09 122585.62
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1772.69 76.17 71467.30 806.91 122891.84
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1813.76 77.93 69870.56 889.71 117686.21
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1811.57 77.84 69980.63 974.08 116964.53
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1803.49 77.49 70332.72 857.67 131364.09
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1787.77 76.82 70988.35 996.85 135237.28
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1824.90 78.41 69591.62 862.91 117405.98
00:26:26.882 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1798.03 77.26 69826.46 1223.71 117345.51
00:26:26.882 ========================================================
00:26:26.882 Total : 18005.43 773.67 70488.07 736.61 135237.28
00:26:26.882
00:26:26.882 [2024-11-26 18:22:14.476266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15172c0 is same with the state(6) to be set
00:26:26.882 [2024-11-26 18:22:14.476389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518900 is same with the state(6) to be set
00:26:26.882 [2024-11-26 18:22:14.476449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517c50 is same with the state(6) to be set
00:26:26.882 [2024-11-26 18:22:14.476506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1517920 is same with the state(6) to be set
00:26:26.882 [2024-11-26 18:22:14.476564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516d10 is same with the state(6) to be set
00:26:26.882 [2024-11-26 18:22:14.476623] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15175f0 is same with the state(6) to be set 00:26:26.882 [2024-11-26 18:22:14.476680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518ae0 is same with the state(6) to be set 00:26:26.882 [2024-11-26 18:22:14.476750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15169e0 is same with the state(6) to be set 00:26:26.882 [2024-11-26 18:22:14.476808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15166b0 is same with the state(6) to be set 00:26:26.882 [2024-11-26 18:22:14.476870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1518720 is same with the state(6) to be set 00:26:26.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:27.141 18:22:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 659673 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 659673 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 659673 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:26:28.077 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:28.078 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:28.078 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:26:28.078 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:28.078 
18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:26:28.078 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:28.078 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:26:28.078 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:28.078 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:28.078 rmmod nvme_tcp 00:26:28.078 rmmod nvme_fabrics 00:26:28.078 rmmod nvme_keyring 00:26:28.078 18:22:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 659515 ']' 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 659515 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 659515 ']' 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 659515 00:26:28.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (659515) - No such process 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 659515 is not found' 00:26:28.078 Process with pid 659515 is not found 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.078 18:22:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.616 18:22:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:30.616 00:26:30.616 real 0m9.811s 00:26:30.616 user 0m23.930s 00:26:30.616 sys 0m5.572s 00:26:30.616 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:26:30.617 ************************************ 00:26:30.617 END TEST nvmf_shutdown_tc4 00:26:30.617 ************************************ 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:26:30.617 00:26:30.617 real 0m37.406s 00:26:30.617 user 1m40.430s 00:26:30.617 sys 0m12.207s 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:30.617 ************************************ 00:26:30.617 END TEST nvmf_shutdown 00:26:30.617 ************************************ 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:30.617 ************************************ 00:26:30.617 START TEST nvmf_nsid 00:26:30.617 ************************************ 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:30.617 * Looking for test storage... 
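The START/END TEST banners and the real/user/sys timing lines just above come from the run_test wrapper; a simplified sketch of that idiom (illustrative only, not the verbatim helper from test/common/autotest_common.sh):

  run_test() {                       # simplified: the real helper also validates arguments and manages xtrace
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                      # the real/user/sys lines in the log come from this timing
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  # usage, as traced above for the test that starts here:
  run_test nvmf_nsid ./test/nvmf/target/nsid.sh --transport=tcp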
00:26:30.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:30.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.617 --rc genhtml_branch_coverage=1 00:26:30.617 --rc genhtml_function_coverage=1 00:26:30.617 --rc genhtml_legend=1 00:26:30.617 --rc geninfo_all_blocks=1 00:26:30.617 --rc geninfo_unexecuted_blocks=1 00:26:30.617 00:26:30.617 ' 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:30.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.617 --rc genhtml_branch_coverage=1 00:26:30.617 --rc genhtml_function_coverage=1 00:26:30.617 --rc genhtml_legend=1 00:26:30.617 --rc geninfo_all_blocks=1 00:26:30.617 --rc geninfo_unexecuted_blocks=1 00:26:30.617 00:26:30.617 ' 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:30.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.617 --rc genhtml_branch_coverage=1 00:26:30.617 --rc genhtml_function_coverage=1 00:26:30.617 --rc genhtml_legend=1 00:26:30.617 --rc geninfo_all_blocks=1 00:26:30.617 --rc geninfo_unexecuted_blocks=1 00:26:30.617 00:26:30.617 ' 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:30.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:30.617 --rc genhtml_branch_coverage=1 00:26:30.617 --rc genhtml_function_coverage=1 00:26:30.617 --rc genhtml_legend=1 00:26:30.617 --rc geninfo_all_blocks=1 00:26:30.617 --rc geninfo_unexecuted_blocks=1 00:26:30.617 00:26:30.617 ' 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.617 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:30.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:26:30.618 18:22:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:32.520 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:32.520 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
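The 'Found 0000:09:00.0 / 0000:09:00.1 (0x8086 - 0x159b)' lines above are the harness matching the host's PCI functions against the Intel E810 device IDs before it resolves their net interfaces; a minimal stand-alone equivalent of that lookup (device address and IDs taken from this log):

  pci=0000:09:00.0
  vendor=$(cat /sys/bus/pci/devices/$pci/vendor)     # 0x8086 on this node
  device=$(cat /sys/bus/pci/devices/$pci/device)     # 0x159b, grouped under e810 by nvmf/common.sh
  if [ "$vendor:$device" = "0x8086:0x159b" ]; then
      # the kernel exposes the port's net interface under the PCI function
      ls /sys/bus/pci/devices/$pci/net/              # -> cvl_0_0 in this run
  fi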
00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:32.520 Found net devices under 0000:09:00.0: cvl_0_0 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:32.520 Found net devices under 0000:09:00.1: cvl_0_1 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.520 18:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.520 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:32.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:26:32.521 00:26:32.521 --- 10.0.0.2 ping statistics --- 00:26:32.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.521 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:26:32.521 00:26:32.521 --- 10.0.0.1 ping statistics --- 00:26:32.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.521 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:32.521 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:32.780 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:26:32.780 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.780 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.780 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:32.780 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=662414 00:26:32.780 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:26:32.781 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 662414 00:26:32.781 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 662414 ']' 00:26:32.781 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.781 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.781 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.781 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.781 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:32.781 [2024-11-26 18:22:20.597073] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:26:32.781 [2024-11-26 18:22:20.597149] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.781 [2024-11-26 18:22:20.670132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.781 [2024-11-26 18:22:20.728885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.781 [2024-11-26 18:22:20.728938] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.781 [2024-11-26 18:22:20.728952] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.781 [2024-11-26 18:22:20.728962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.781 [2024-11-26 18:22:20.728971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.781 [2024-11-26 18:22:20.729553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=662433 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
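The nsid suite above has just launched a second SPDK target (tgt2pid=662433) on its own RPC socket so the first target's default socket is left alone. For readers who want to reproduce that pattern outside this harness, a minimal sketch follows; the binary path is a placeholder, not the workspace path from this run, and only RPCs that actually appear later in this trace are used:

  # Launch a second SPDK target pinned to core 1 (-m 2) with a dedicated RPC socket.
  /path/to/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
  tgt2pid=$!

  # Every RPC aimed at that instance must name the socket with -s,
  # otherwise rpc.py would talk to the default /var/tmp/spdk.sock target.
  /path/to/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_create_transport -t tcp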
00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=7dd4929a-f806-4425-9078-c10c7c51fb00 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=386b1e27-0534-4bd1-9868-f7240754726f 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=75dfb8a7-c8bc-4d35-b846-f6c8349fbcfb 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:33.040 null0 00:26:33.040 null1 00:26:33.040 null2 00:26:33.040 [2024-11-26 18:22:20.918623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.040 [2024-11-26 18:22:20.939636] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:26:33.040 [2024-11-26 18:22:20.939732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid662433 ] 00:26:33.040 [2024-11-26 18:22:20.942840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 662433 /var/tmp/tgt2.sock 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 662433 ']' 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:26:33.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:33.040 18:22:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:33.040 [2024-11-26 18:22:21.013215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.299 [2024-11-26 18:22:21.077183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.557 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:33.557 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:33.557 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:33.815 [2024-11-26 18:22:21.726499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.815 [2024-11-26 18:22:21.742691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:26:33.815 nvme0n1 nvme0n2 00:26:33.815 nvme1n1 00:26:33.815 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:33.815 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:33.815 18:22:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:34.381 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:34.381 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:34.381 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:26:34.381 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:34.382 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:26:34.382 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:34.382 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:34.382 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:34.382 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:34.382 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:34.382 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:26:34.382 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:26:34.382 18:22:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:35.756 18:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 7dd4929a-f806-4425-9078-c10c7c51fb00 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7dd4929af80644259078c10c7c51fb00 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7DD4929AF80644259078C10C7C51FB00 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 7DD4929AF80644259078C10C7C51FB00 == \7\D\D\4\9\2\9\A\F\8\0\6\4\4\2\5\9\0\7\8\C\1\0\C\7\C\5\1\F\B\0\0 ]] 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 386b1e27-0534-4bd1-9868-f7240754726f 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=386b1e2705344bd19868f7240754726f 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 386B1E2705344BD19868F7240754726F 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 386B1E2705344BD19868F7240754726F == \3\8\6\B\1\E\2\7\0\5\3\4\4\B\D\1\9\8\6\8\F\7\2\4\0\7\5\4\7\2\6\F ]] 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:26:35.756 18:22:23 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 75dfb8a7-c8bc-4d35-b846-f6c8349fbcfb 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=75dfb8a7c8bc4d35b846f6c8349fbcfb 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 75DFB8A7C8BC4D35B846F6C8349FBCFB 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 75DFB8A7C8BC4D35B846F6C8349FBCFB == \7\5\D\F\B\8\A\7\C\8\B\C\4\D\3\5\B\8\4\6\F\6\C\8\3\4\9\F\B\C\F\B ]] 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 662433 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 662433 ']' 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 662433 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 662433 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 662433' 00:26:35.756 killing process with pid 662433 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 662433 00:26:35.756 18:22:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 662433 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:36.321 rmmod nvme_tcp 00:26:36.321 rmmod nvme_fabrics 00:26:36.321 rmmod nvme_keyring 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 662414 ']' 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 662414 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 662414 ']' 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 662414 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 662414 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 662414' 00:26:36.321 killing process with pid 662414 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 662414 00:26:36.321 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 662414 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.580 18:22:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.118 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:39.118 00:26:39.118 real 0m8.432s 00:26:39.118 user 0m8.313s 00:26:39.118 
sys 0m2.698s 00:26:39.118 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.118 18:22:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:39.118 ************************************ 00:26:39.118 END TEST nvmf_nsid 00:26:39.118 ************************************ 00:26:39.118 18:22:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:39.118 00:26:39.118 real 11m42.300s 00:26:39.118 user 27m43.568s 00:26:39.118 sys 2m49.384s 00:26:39.118 18:22:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.118 18:22:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:39.118 ************************************ 00:26:39.118 END TEST nvmf_target_extra 00:26:39.118 ************************************ 00:26:39.118 18:22:26 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:39.118 18:22:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.118 18:22:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.118 18:22:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:39.118 ************************************ 00:26:39.118 START TEST nvmf_host 00:26:39.118 ************************************ 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:39.118 * Looking for test storage... 00:26:39.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:39.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.118 --rc genhtml_branch_coverage=1 00:26:39.118 --rc genhtml_function_coverage=1 00:26:39.118 --rc genhtml_legend=1 00:26:39.118 --rc geninfo_all_blocks=1 00:26:39.118 --rc geninfo_unexecuted_blocks=1 00:26:39.118 00:26:39.118 ' 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:39.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.118 --rc genhtml_branch_coverage=1 00:26:39.118 --rc genhtml_function_coverage=1 00:26:39.118 --rc genhtml_legend=1 00:26:39.118 --rc geninfo_all_blocks=1 00:26:39.118 --rc geninfo_unexecuted_blocks=1 00:26:39.118 00:26:39.118 ' 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:39.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.118 --rc genhtml_branch_coverage=1 00:26:39.118 --rc genhtml_function_coverage=1 00:26:39.118 --rc genhtml_legend=1 00:26:39.118 --rc geninfo_all_blocks=1 00:26:39.118 --rc geninfo_unexecuted_blocks=1 00:26:39.118 00:26:39.118 ' 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:39.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.118 --rc genhtml_branch_coverage=1 00:26:39.118 --rc genhtml_function_coverage=1 00:26:39.118 --rc genhtml_legend=1 00:26:39.118 --rc geninfo_all_blocks=1 00:26:39.118 --rc geninfo_unexecuted_blocks=1 00:26:39.118 00:26:39.118 ' 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.118 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
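The common.sh lines around here establish the host identity reused by every connect in the host-side suites: a generated host NQN plus the UUID embedded in it as the host ID. A short sketch of that pairing, assuming nvme-cli is installed; the target address, port, and subsystem NQN below are illustrative placeholders taken from values seen elsewhere in this trace, not a prescribed configuration:

  # Generate a UUID-form host NQN and reuse the embedded UUID as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)           # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}        # keep only the trailing UUID

  # Use the pair on every connect so the target sees one consistent host.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"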
00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.119 ************************************ 00:26:39.119 START TEST nvmf_multicontroller 00:26:39.119 ************************************ 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:39.119 * Looking for test storage... 
00:26:39.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:39.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.119 --rc genhtml_branch_coverage=1 00:26:39.119 --rc genhtml_function_coverage=1 00:26:39.119 --rc genhtml_legend=1 00:26:39.119 --rc geninfo_all_blocks=1 00:26:39.119 --rc geninfo_unexecuted_blocks=1 00:26:39.119 00:26:39.119 ' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:39.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.119 --rc genhtml_branch_coverage=1 00:26:39.119 --rc genhtml_function_coverage=1 00:26:39.119 --rc genhtml_legend=1 00:26:39.119 --rc geninfo_all_blocks=1 00:26:39.119 --rc geninfo_unexecuted_blocks=1 00:26:39.119 00:26:39.119 ' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:39.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.119 --rc genhtml_branch_coverage=1 00:26:39.119 --rc genhtml_function_coverage=1 00:26:39.119 --rc genhtml_legend=1 00:26:39.119 --rc geninfo_all_blocks=1 00:26:39.119 --rc geninfo_unexecuted_blocks=1 00:26:39.119 00:26:39.119 ' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:39.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.119 --rc genhtml_branch_coverage=1 00:26:39.119 --rc genhtml_function_coverage=1 00:26:39.119 --rc genhtml_legend=1 00:26:39.119 --rc geninfo_all_blocks=1 00:26:39.119 --rc geninfo_unexecuted_blocks=1 00:26:39.119 00:26:39.119 ' 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:39.119 18:22:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.119 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:39.120 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:39.120 18:22:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:39.120 18:22:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:26:39.120 18:22:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:26:41.691 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:26:41.692 
18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:41.692 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:41.692 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.692 18:22:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:41.692 Found net devices under 0000:09:00.0: cvl_0_0 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:41.692 Found net devices under 0000:09:00.1: cvl_0_1 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
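The nvmf_tcp_init step that follows moves one physical port into a private namespace and assigns addressing so the initiator can reach the target over 10.0.0.0/24. A condensed bash sketch of the same sequence, assuming two interfaces named cvl_0_0 and cvl_0_1 as in this trace and root privileges; it mirrors the traced commands rather than quoting nvmf/common.sh verbatim:

  ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                  # sanity check: reach the target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back out to the initiator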
00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:41.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:26:41.692 00:26:41.692 --- 10.0.0.2 ping statistics --- 00:26:41.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.692 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:26:41.692 00:26:41.692 --- 10.0.0.1 ping statistics --- 00:26:41.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.692 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.692 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=664990 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 664990 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 664990 ']' 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.693 [2024-11-26 18:22:29.443938] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:26:41.693 [2024-11-26 18:22:29.444011] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.693 [2024-11-26 18:22:29.515109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:41.693 [2024-11-26 18:22:29.575910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.693 [2024-11-26 18:22:29.575958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.693 [2024-11-26 18:22:29.575982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.693 [2024-11-26 18:22:29.575992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.693 [2024-11-26 18:22:29.576002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.693 [2024-11-26 18:22:29.577507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.693 [2024-11-26 18:22:29.577577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.693 [2024-11-26 18:22:29.577573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:41.693 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 [2024-11-26 18:22:29.716975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 Malloc0 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 [2024-11-26 18:22:29.774208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 [2024-11-26 18:22:29.782106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 Malloc1 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=665028 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 665028 /var/tmp/bdevperf.sock 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 665028 ']' 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:41.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
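The target-side layout built above through rpc_cmd (a malloc namespace per subsystem and two TCP listeners on 10.0.0.2) can be reproduced by hand with scripts/rpc.py; every RPC name and flag below is taken from the trace, and only the shortened rpc.py path is an assumption:

# illustrative sketch only -- explicit rpc.py form of the rpc_cmd calls traced above
rpc="./scripts/rpc.py"                       # assumed to be run from the SPDK tree
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 is configured the same way with Malloc1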
00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.951 18:22:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.210 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.210 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:42.210 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:42.210 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.210 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.468 NVMe0n1 00:26:42.468 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.468 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:42.468 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:42.468 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.468 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.468 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.468 1 00:26:42.468 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:42.468 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.469 request: 00:26:42.469 { 00:26:42.469 "name": "NVMe0", 00:26:42.469 "trtype": "tcp", 00:26:42.469 "traddr": "10.0.0.2", 00:26:42.469 "adrfam": "ipv4", 00:26:42.469 "trsvcid": "4420", 00:26:42.469 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:26:42.469 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:42.469 "hostaddr": "10.0.0.1", 00:26:42.469 "prchk_reftag": false, 00:26:42.469 "prchk_guard": false, 00:26:42.469 "hdgst": false, 00:26:42.469 "ddgst": false, 00:26:42.469 "allow_unrecognized_csi": false, 00:26:42.469 "method": "bdev_nvme_attach_controller", 00:26:42.469 "req_id": 1 00:26:42.469 } 00:26:42.469 Got JSON-RPC error response 00:26:42.469 response: 00:26:42.469 { 00:26:42.469 "code": -114, 00:26:42.469 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:42.469 } 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.469 request: 00:26:42.469 { 00:26:42.469 "name": "NVMe0", 00:26:42.469 "trtype": "tcp", 00:26:42.469 "traddr": "10.0.0.2", 00:26:42.469 "adrfam": "ipv4", 00:26:42.469 "trsvcid": "4420", 00:26:42.469 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:42.469 "hostaddr": "10.0.0.1", 00:26:42.469 "prchk_reftag": false, 00:26:42.469 "prchk_guard": false, 00:26:42.469 "hdgst": false, 00:26:42.469 "ddgst": false, 00:26:42.469 "allow_unrecognized_csi": false, 00:26:42.469 "method": "bdev_nvme_attach_controller", 00:26:42.469 "req_id": 1 00:26:42.469 } 00:26:42.469 Got JSON-RPC error response 00:26:42.469 response: 00:26:42.469 { 00:26:42.469 "code": -114, 00:26:42.469 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:42.469 } 00:26:42.469 18:22:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.469 request: 00:26:42.469 { 00:26:42.469 "name": "NVMe0", 00:26:42.469 "trtype": "tcp", 00:26:42.469 "traddr": "10.0.0.2", 00:26:42.469 "adrfam": "ipv4", 00:26:42.469 "trsvcid": "4420", 00:26:42.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.469 "hostaddr": "10.0.0.1", 00:26:42.469 "prchk_reftag": false, 00:26:42.469 "prchk_guard": false, 00:26:42.469 "hdgst": false, 00:26:42.469 "ddgst": false, 00:26:42.469 "multipath": "disable", 00:26:42.469 "allow_unrecognized_csi": false, 00:26:42.469 "method": "bdev_nvme_attach_controller", 00:26:42.469 "req_id": 1 00:26:42.469 } 00:26:42.469 Got JSON-RPC error response 00:26:42.469 response: 00:26:42.469 { 00:26:42.469 "code": -114, 00:26:42.469 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:26:42.469 } 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:42.469 18:22:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.469 request: 00:26:42.469 { 00:26:42.469 "name": "NVMe0", 00:26:42.469 "trtype": "tcp", 00:26:42.469 "traddr": "10.0.0.2", 00:26:42.469 "adrfam": "ipv4", 00:26:42.469 "trsvcid": "4420", 00:26:42.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:42.469 "hostaddr": "10.0.0.1", 00:26:42.469 "prchk_reftag": false, 00:26:42.469 "prchk_guard": false, 00:26:42.469 "hdgst": false, 00:26:42.469 "ddgst": false, 00:26:42.469 "multipath": "failover", 00:26:42.469 "allow_unrecognized_csi": false, 00:26:42.469 "method": "bdev_nvme_attach_controller", 00:26:42.469 "req_id": 1 00:26:42.469 } 00:26:42.469 Got JSON-RPC error response 00:26:42.469 response: 00:26:42.469 { 00:26:42.469 "code": -114, 00:26:42.469 "message": "A controller named NVMe0 already exists with the specified network path" 00:26:42.469 } 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:42.469 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.470 NVMe0n1 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
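The four NOT cases above all fail with -114: re-using the controller name NVMe0 with a different host or subsystem NQN, or with multipath set to disable/failover while pointing at an already-attached path, is rejected; only a genuinely new path to the same subsystem (port 4421) is accepted. A sketch of the two accepted calls against the bdevperf RPC socket, with names, flags and addresses as they appear in the trace:

# illustrative sketch only -- the accepted attach sequence from the trace
rpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
# first path: creates bdev NVMe0n1
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
# second listener of the same subsystem: added as an extra path under NVMe0
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1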
00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.470 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.727 00:26:42.728 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.728 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:42.728 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:42.728 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.728 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:42.728 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.728 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:42.728 18:22:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:43.662 { 00:26:43.662 "results": [ 00:26:43.662 { 00:26:43.662 "job": "NVMe0n1", 00:26:43.662 "core_mask": "0x1", 00:26:43.662 "workload": "write", 00:26:43.662 "status": "finished", 00:26:43.662 "queue_depth": 128, 00:26:43.662 "io_size": 4096, 00:26:43.662 "runtime": 1.003392, 00:26:43.662 "iops": 18422.5108432198, 00:26:43.662 "mibps": 71.96293298132734, 00:26:43.662 "io_failed": 0, 00:26:43.662 "io_timeout": 0, 00:26:43.662 "avg_latency_us": 6937.4434439936285, 00:26:43.662 "min_latency_us": 2220.9422222222224, 00:26:43.662 "max_latency_us": 12233.386666666667 00:26:43.662 } 00:26:43.662 ], 00:26:43.662 "core_count": 1 00:26:43.662 } 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 665028 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 665028 ']' 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 665028 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.662 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 665028 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 665028' 00:26:43.921 killing process with pid 665028 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 665028 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 665028 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:26:43.921 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:26:44.180 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:44.180 [2024-11-26 18:22:29.883129] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:26:44.180 [2024-11-26 18:22:29.883237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665028 ] 00:26:44.180 [2024-11-26 18:22:29.952177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.180 [2024-11-26 18:22:30.014075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.180 [2024-11-26 18:22:30.491755] bdev.c:4906:bdev_name_add: *ERROR*: Bdev name 8535fea2-c4f8-4cce-905a-6c65bdca10b4 already exists 00:26:44.180 [2024-11-26 18:22:30.491809] bdev.c:8106:bdev_register: *ERROR*: Unable to add uuid:8535fea2-c4f8-4cce-905a-6c65bdca10b4 alias for bdev NVMe1n1 00:26:44.180 [2024-11-26 18:22:30.491824] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:44.180 Running I/O for 1 seconds... 00:26:44.180 18357.00 IOPS, 71.71 MiB/s 00:26:44.180 Latency(us) 00:26:44.180 [2024-11-26T17:22:32.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.180 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:44.180 NVMe0n1 : 1.00 18422.51 71.96 0.00 0.00 6937.44 2220.94 12233.39 00:26:44.180 [2024-11-26T17:22:32.191Z] =================================================================================================================== 00:26:44.180 [2024-11-26T17:22:32.191Z] Total : 18422.51 71.96 0.00 0.00 6937.44 2220.94 12233.39 00:26:44.180 Received shutdown signal, test time was about 1.000000 seconds 00:26:44.180 00:26:44.180 Latency(us) 00:26:44.180 [2024-11-26T17:22:32.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.180 [2024-11-26T17:22:32.191Z] =================================================================================================================== 00:26:44.180 [2024-11-26T17:22:32.191Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:44.180 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:44.180 rmmod nvme_tcp 00:26:44.180 rmmod nvme_fabrics 00:26:44.180 rmmod nvme_keyring 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:26:44.180 
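The try.txt capture above records the whole bdevperf pass (about 18.4k 4 KiB write IOPS over one second) together with the expected "Bdev name ... already exists" errors when NVMe1 is attached to the same namespace. Reduced to its two commands, the I/O phase is simply the following; both commands appear in the trace, and the relative paths assume the SPDK tree as the working directory:

# illustrative sketch only -- the bdevperf side of the run documented in try.txt
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
# ... controllers are attached over /var/tmp/bdevperf.sock as shown above, then:
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests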
18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 664990 ']' 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 664990 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 664990 ']' 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 664990 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.180 18:22:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 664990 00:26:44.180 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:44.180 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:44.180 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 664990' 00:26:44.180 killing process with pid 664990 00:26:44.180 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 664990 00:26:44.180 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 664990 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.438 18:22:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.970 18:22:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:46.970 00:26:46.970 real 0m7.540s 00:26:46.971 user 0m11.197s 00:26:46.971 sys 0m2.462s 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:46.971 ************************************ 00:26:46.971 END TEST nvmf_multicontroller 00:26:46.971 ************************************ 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.971 ************************************ 00:26:46.971 START TEST nvmf_aer 00:26:46.971 ************************************ 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:46.971 * Looking for test storage... 00:26:46.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:46.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.971 --rc genhtml_branch_coverage=1 00:26:46.971 --rc genhtml_function_coverage=1 00:26:46.971 --rc genhtml_legend=1 00:26:46.971 --rc geninfo_all_blocks=1 00:26:46.971 --rc geninfo_unexecuted_blocks=1 00:26:46.971 00:26:46.971 ' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:46.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.971 --rc genhtml_branch_coverage=1 00:26:46.971 --rc genhtml_function_coverage=1 00:26:46.971 --rc genhtml_legend=1 00:26:46.971 --rc geninfo_all_blocks=1 00:26:46.971 --rc geninfo_unexecuted_blocks=1 00:26:46.971 00:26:46.971 ' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:46.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.971 --rc genhtml_branch_coverage=1 00:26:46.971 --rc genhtml_function_coverage=1 00:26:46.971 --rc genhtml_legend=1 00:26:46.971 --rc geninfo_all_blocks=1 00:26:46.971 --rc geninfo_unexecuted_blocks=1 00:26:46.971 00:26:46.971 ' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:46.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.971 --rc genhtml_branch_coverage=1 00:26:46.971 --rc genhtml_function_coverage=1 00:26:46.971 --rc genhtml_legend=1 00:26:46.971 --rc geninfo_all_blocks=1 00:26:46.971 --rc geninfo_unexecuted_blocks=1 00:26:46.971 00:26:46.971 ' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:46.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:46.971 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:26:46.972 18:22:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.865 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:48.866 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:48.866 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:48.866 Found net devices under 0000:09:00.0: cvl_0_0 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.866 18:22:36 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:48.866 Found net devices under 0000:09:00.1: cvl_0_1 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:48.866 
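Condensed, the namespace plumbing nvmf_tcp_init performs above is a short script: the first E810 port (cvl_0_0) moves into a private namespace and carries the target address 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule lets NVMe/TCP traffic in on port 4420. Names and addresses below are the ones shown in the trace; the comment tag is simplified:

# Target/initiator loopback topology sketch, run as root.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in on the initiator-side interface; the comment tag is
# what lets the teardown later strip exactly this rule.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF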
18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:48.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:26:48.866 00:26:48.866 --- 10.0.0.2 ping statistics --- 00:26:48.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.866 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:48.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:26:48.866 00:26:48.866 --- 10.0.0.1 ping statistics --- 00:26:48.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.866 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:48.866 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=667243 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 667243 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 667243 ']' 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.867 18:22:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:48.867 [2024-11-26 18:22:36.874060] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
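nvmfappstart then launches nvmf_tgt inside that namespace with the mask and flags shown above and blocks until the app's RPC socket answers. A simplified stand-in for the waitforlisten helper, assuming the default /var/tmp/spdk.sock socket; spdk_get_version is a standard framework RPC, and the retry bound here is an arbitrary choice for the sketch:

# Start the target in the namespace, then poll its RPC socket until it responds.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
    sleep 0.2
done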
00:26:48.867 [2024-11-26 18:22:36.874142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.125 [2024-11-26 18:22:36.947023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:49.125 [2024-11-26 18:22:37.007615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.125 [2024-11-26 18:22:37.007683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.125 [2024-11-26 18:22:37.007696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.125 [2024-11-26 18:22:37.007707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.125 [2024-11-26 18:22:37.007731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.125 [2024-11-26 18:22:37.009259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.125 [2024-11-26 18:22:37.009392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.125 [2024-11-26 18:22:37.009524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.125 [2024-11-26 18:22:37.009528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.125 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.125 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:49.125 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:49.125 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:49.125 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.383 [2024-11-26 18:22:37.161894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.383 Malloc0 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.383 [2024-11-26 18:22:37.223939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.383 [ 00:26:49.383 { 00:26:49.383 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:49.383 "subtype": "Discovery", 00:26:49.383 "listen_addresses": [], 00:26:49.383 "allow_any_host": true, 00:26:49.383 "hosts": [] 00:26:49.383 }, 00:26:49.383 { 00:26:49.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.383 "subtype": "NVMe", 00:26:49.383 "listen_addresses": [ 00:26:49.383 { 00:26:49.383 "trtype": "TCP", 00:26:49.383 "adrfam": "IPv4", 00:26:49.383 "traddr": "10.0.0.2", 00:26:49.383 "trsvcid": "4420" 00:26:49.383 } 00:26:49.383 ], 00:26:49.383 "allow_any_host": true, 00:26:49.383 "hosts": [], 00:26:49.383 "serial_number": "SPDK00000000000001", 00:26:49.383 "model_number": "SPDK bdev Controller", 00:26:49.383 "max_namespaces": 2, 00:26:49.383 "min_cntlid": 1, 00:26:49.383 "max_cntlid": 65519, 00:26:49.383 "namespaces": [ 00:26:49.383 { 00:26:49.383 "nsid": 1, 00:26:49.383 "bdev_name": "Malloc0", 00:26:49.383 "name": "Malloc0", 00:26:49.383 "nguid": "58AE8A70B1E64B44BBE3E84D2EDE5FAA", 00:26:49.383 "uuid": "58ae8a70-b1e6-4b44-bbe3-e84d2ede5faa" 00:26:49.383 } 00:26:49.383 ] 00:26:49.383 } 00:26:49.383 ] 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=667382 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:49.383 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.640 Malloc1 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.640 [ 00:26:49.640 { 00:26:49.640 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:49.640 "subtype": "Discovery", 00:26:49.640 "listen_addresses": [], 00:26:49.640 "allow_any_host": true, 00:26:49.640 "hosts": [] 00:26:49.640 }, 00:26:49.640 { 00:26:49.640 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.640 "subtype": "NVMe", 00:26:49.640 "listen_addresses": [ 00:26:49.640 { 00:26:49.640 "trtype": "TCP", 00:26:49.640 "adrfam": "IPv4", 00:26:49.640 "traddr": "10.0.0.2", 00:26:49.640 "trsvcid": "4420" 00:26:49.640 } 00:26:49.640 ], 00:26:49.640 "allow_any_host": true, 00:26:49.640 "hosts": [], 00:26:49.640 "serial_number": "SPDK00000000000001", 00:26:49.640 "model_number": "SPDK bdev Controller", 00:26:49.640 "max_namespaces": 2, 00:26:49.640 "min_cntlid": 1, 00:26:49.640 "max_cntlid": 65519, 00:26:49.640 "namespaces": [ 00:26:49.640 
{ 00:26:49.640 "nsid": 1, 00:26:49.640 "bdev_name": "Malloc0", 00:26:49.640 "name": "Malloc0", 00:26:49.640 "nguid": "58AE8A70B1E64B44BBE3E84D2EDE5FAA", 00:26:49.640 "uuid": "58ae8a70-b1e6-4b44-bbe3-e84d2ede5faa" 00:26:49.640 }, 00:26:49.640 { 00:26:49.640 "nsid": 2, 00:26:49.640 "bdev_name": "Malloc1", 00:26:49.640 "name": "Malloc1", 00:26:49.640 "nguid": "379FFA82696B4D709270D96A9603085B", 00:26:49.640 "uuid": "379ffa82-696b-4d70-9270-d96a9603085b" 00:26:49.640 } 00:26:49.640 ] 00:26:49.640 } 00:26:49.640 ] 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 667382 00:26:49.640 Asynchronous Event Request test 00:26:49.640 Attaching to 10.0.0.2 00:26:49.640 Attached to 10.0.0.2 00:26:49.640 Registering asynchronous event callbacks... 00:26:49.640 Starting namespace attribute notice tests for all controllers... 00:26:49.640 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:49.640 aer_cb - Changed Namespace 00:26:49.640 Cleaning up... 00:26:49.640 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:49.641 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.641 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:49.898 rmmod nvme_tcp 00:26:49.898 rmmod nvme_fabrics 00:26:49.898 rmmod nvme_keyring 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 667243 ']' 00:26:49.898 
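Stripped of the xtrace noise, the AER scenario just completed is: build an open TCP subsystem with one malloc namespace, attach test/nvme/aer/aer to it, add a second namespace so the target raises a Namespace Attribute Changed event (the "aer_cb - Changed Namespace" line), then delete everything. A condensed sketch using the same rpc.py verbs and arguments as the trace; rpc_cmd in the harness is assumed to be a thin wrapper around scripts/rpc.py, and waitforfile is inlined from the polling loop visible above:

rpc="./scripts/rpc.py"
waitforfile() { local i=0; while [ ! -e "$1" ] && [ $i -lt 200 ]; do i=$((i+1)); sleep 0.1; done; [ -e "$1" ]; }
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Attach the AER listener; it creates the touch file once it is connected and armed.
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
aerpid=$!
waitforfile /tmp/aer_touch_file
# Adding a second namespace is what triggers the Namespace Attribute Changed AEN.
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait $aerpid
# Cleanup mirrors the trace: drop the bdevs and the subsystem, after which the
# harness's nvmftestfini stops the target and undoes the network setup.
$rpc bdev_malloc_delete Malloc0
$rpc bdev_malloc_delete Malloc1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1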
18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 667243 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 667243 ']' 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 667243 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 667243 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 667243' 00:26:49.898 killing process with pid 667243 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 667243 00:26:49.898 18:22:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 667243 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.156 18:22:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:52.696 00:26:52.696 real 0m5.663s 00:26:52.696 user 0m4.856s 00:26:52.696 sys 0m2.058s 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:52.696 ************************************ 00:26:52.696 END TEST nvmf_aer 00:26:52.696 ************************************ 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.696 ************************************ 00:26:52.696 START TEST nvmf_async_init 00:26:52.696 
************************************ 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:52.696 * Looking for test storage... 00:26:52.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:52.696 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:52.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.697 --rc genhtml_branch_coverage=1 00:26:52.697 --rc genhtml_function_coverage=1 00:26:52.697 --rc genhtml_legend=1 00:26:52.697 --rc geninfo_all_blocks=1 00:26:52.697 --rc geninfo_unexecuted_blocks=1 00:26:52.697 00:26:52.697 ' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:52.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.697 --rc genhtml_branch_coverage=1 00:26:52.697 --rc genhtml_function_coverage=1 00:26:52.697 --rc genhtml_legend=1 00:26:52.697 --rc geninfo_all_blocks=1 00:26:52.697 --rc geninfo_unexecuted_blocks=1 00:26:52.697 00:26:52.697 ' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:52.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.697 --rc genhtml_branch_coverage=1 00:26:52.697 --rc genhtml_function_coverage=1 00:26:52.697 --rc genhtml_legend=1 00:26:52.697 --rc geninfo_all_blocks=1 00:26:52.697 --rc geninfo_unexecuted_blocks=1 00:26:52.697 00:26:52.697 ' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:52.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:52.697 --rc genhtml_branch_coverage=1 00:26:52.697 --rc genhtml_function_coverage=1 00:26:52.697 --rc genhtml_legend=1 00:26:52.697 --rc geninfo_all_blocks=1 00:26:52.697 --rc geninfo_unexecuted_blocks=1 00:26:52.697 00:26:52.697 ' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:52.697 18:22:40 
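The lt/cmp_versions dance traced above (used here to decide which lcov options apply) is a field-wise numeric comparison after splitting both version strings on '.' and '-'. A simplified re-creation in the same spirit; the real scripts/common.sh implementation has more cases than this:

# Return success when $1 < $2 as dotted versions; missing fields count as 0.
lt() {
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2.x"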
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:52.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:52.697 18:22:40 
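The recurring "[: : integer expression expected" message in this trace is a benign artifact of nvmf/common.sh line 33 applying a numeric test to a variable that is empty in this run; the usual defensive pattern is to default the value before comparing. The variable and flag names below are placeholders for illustration, not the harness's real names:

# Hypothetical guard showing the defensive form of the failing test above.
if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
    NVMF_APP+=(--some-option)   # placeholder option, not a real nvmf_tgt flag
fi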
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=466616b3469f4e7f9f4fac6fc8147202 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:26:52.697 18:22:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.599 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:54.600 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:54.600 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:54.600 Found net devices under 0000:09:00.0: cvl_0_0 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:54.600 Found net devices under 0000:09:00.1: cvl_0_1 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.600 18:22:42 
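With two usable ports found again for the async_init run, the harness picks one as the target-side interface and the other as the initiator side, exactly as in the AER run; the assignments follow in the trace below. A simplified reading of that selection, with the index choice inferred from the cvl_0_0/cvl_0_1 values seen in this run:

# Simplified interface selection; the real nvmf_tcp_init has more branches.
net_devs=(cvl_0_0 cvl_0_1)                               # as discovered above
TCP_INTERFACE_LIST=("${net_devs[@]}")
if (( ${#TCP_INTERFACE_LIST[@]} > 1 )); then
    NVMF_TARGET_INTERFACE=${TCP_INTERFACE_LIST[0]}       # cvl_0_0 in this run
    NVMF_INITIATOR_INTERFACE=${TCP_INTERFACE_LIST[1]}    # cvl_0_1 in this run
fi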
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:54.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:26:54.600 00:26:54.600 --- 10.0.0.2 ping statistics --- 00:26:54.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.600 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:26:54.600 00:26:54.600 --- 10.0.0.1 ping statistics --- 00:26:54.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.600 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=669333 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 669333 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 669333 ']' 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:54.600 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:54.600 [2024-11-26 18:22:42.577328] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
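The second target instance for async_init is started the same way as before but on a single core (-m 0x1). After the listen wait succeeds, the reactor count can be confirmed over RPC; framework_get_reactors is a standard SPDK RPC, and the grep is only a rough check for this sketch, not the harness's own verification:

# Single-core target for the async_init test, then a rough sanity check of the mask.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
sleep 1   # stand-in for the waitforlisten polling shown earlier
./scripts/rpc.py framework_get_reactors | grep -c '"lcore"'   # expect 1 with -m 0x1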
00:26:54.600 [2024-11-26 18:22:42.577423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.859 [2024-11-26 18:22:42.653063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.859 [2024-11-26 18:22:42.710347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.859 [2024-11-26 18:22:42.710399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.859 [2024-11-26 18:22:42.710427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.859 [2024-11-26 18:22:42.710438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.859 [2024-11-26 18:22:42.710448] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.859 [2024-11-26 18:22:42.711007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:54.859 [2024-11-26 18:22:42.856931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.859 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.117 null0 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 466616b3469f4e7f9f4fac6fc8147202 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.117 [2024-11-26 18:22:42.897205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.117 18:22:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.376 nvme0n1 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.376 [ 00:26:55.376 { 00:26:55.376 "name": "nvme0n1", 00:26:55.376 "aliases": [ 00:26:55.376 "466616b3-469f-4e7f-9f4f-ac6fc8147202" 00:26:55.376 ], 00:26:55.376 "product_name": "NVMe disk", 00:26:55.376 "block_size": 512, 00:26:55.376 "num_blocks": 2097152, 00:26:55.376 "uuid": "466616b3-469f-4e7f-9f4f-ac6fc8147202", 00:26:55.376 "numa_id": 0, 00:26:55.376 "assigned_rate_limits": { 00:26:55.376 "rw_ios_per_sec": 0, 00:26:55.376 "rw_mbytes_per_sec": 0, 00:26:55.376 "r_mbytes_per_sec": 0, 00:26:55.376 "w_mbytes_per_sec": 0 00:26:55.376 }, 00:26:55.376 "claimed": false, 00:26:55.376 "zoned": false, 00:26:55.376 "supported_io_types": { 00:26:55.376 "read": true, 00:26:55.376 "write": true, 00:26:55.376 "unmap": false, 00:26:55.376 "flush": true, 00:26:55.376 "reset": true, 00:26:55.376 "nvme_admin": true, 00:26:55.376 "nvme_io": true, 00:26:55.376 "nvme_io_md": false, 00:26:55.376 "write_zeroes": true, 00:26:55.376 "zcopy": false, 00:26:55.376 "get_zone_info": false, 00:26:55.376 "zone_management": false, 00:26:55.376 "zone_append": false, 00:26:55.376 "compare": true, 00:26:55.376 "compare_and_write": true, 00:26:55.376 "abort": true, 00:26:55.376 "seek_hole": false, 00:26:55.376 "seek_data": false, 00:26:55.376 "copy": true, 00:26:55.376 "nvme_iov_md": false 00:26:55.376 }, 00:26:55.376 
"memory_domains": [ 00:26:55.376 { 00:26:55.376 "dma_device_id": "system", 00:26:55.376 "dma_device_type": 1 00:26:55.376 } 00:26:55.376 ], 00:26:55.376 "driver_specific": { 00:26:55.376 "nvme": [ 00:26:55.376 { 00:26:55.376 "trid": { 00:26:55.376 "trtype": "TCP", 00:26:55.376 "adrfam": "IPv4", 00:26:55.376 "traddr": "10.0.0.2", 00:26:55.376 "trsvcid": "4420", 00:26:55.376 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:55.376 }, 00:26:55.376 "ctrlr_data": { 00:26:55.376 "cntlid": 1, 00:26:55.376 "vendor_id": "0x8086", 00:26:55.376 "model_number": "SPDK bdev Controller", 00:26:55.376 "serial_number": "00000000000000000000", 00:26:55.376 "firmware_revision": "25.01", 00:26:55.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:55.376 "oacs": { 00:26:55.376 "security": 0, 00:26:55.376 "format": 0, 00:26:55.376 "firmware": 0, 00:26:55.376 "ns_manage": 0 00:26:55.376 }, 00:26:55.376 "multi_ctrlr": true, 00:26:55.376 "ana_reporting": false 00:26:55.376 }, 00:26:55.376 "vs": { 00:26:55.376 "nvme_version": "1.3" 00:26:55.376 }, 00:26:55.376 "ns_data": { 00:26:55.376 "id": 1, 00:26:55.376 "can_share": true 00:26:55.376 } 00:26:55.376 } 00:26:55.376 ], 00:26:55.376 "mp_policy": "active_passive" 00:26:55.376 } 00:26:55.376 } 00:26:55.376 ] 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.376 [2024-11-26 18:22:43.145897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:55.376 [2024-11-26 18:22:43.145984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1382710 (9): Bad file descriptor 00:26:55.376 [2024-11-26 18:22:43.278443] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.376 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.376 [ 00:26:55.376 { 00:26:55.376 "name": "nvme0n1", 00:26:55.376 "aliases": [ 00:26:55.376 "466616b3-469f-4e7f-9f4f-ac6fc8147202" 00:26:55.376 ], 00:26:55.376 "product_name": "NVMe disk", 00:26:55.376 "block_size": 512, 00:26:55.376 "num_blocks": 2097152, 00:26:55.376 "uuid": "466616b3-469f-4e7f-9f4f-ac6fc8147202", 00:26:55.376 "numa_id": 0, 00:26:55.376 "assigned_rate_limits": { 00:26:55.376 "rw_ios_per_sec": 0, 00:26:55.376 "rw_mbytes_per_sec": 0, 00:26:55.376 "r_mbytes_per_sec": 0, 00:26:55.376 "w_mbytes_per_sec": 0 00:26:55.376 }, 00:26:55.376 "claimed": false, 00:26:55.376 "zoned": false, 00:26:55.376 "supported_io_types": { 00:26:55.376 "read": true, 00:26:55.376 "write": true, 00:26:55.376 "unmap": false, 00:26:55.376 "flush": true, 00:26:55.376 "reset": true, 00:26:55.376 "nvme_admin": true, 00:26:55.376 "nvme_io": true, 00:26:55.376 "nvme_io_md": false, 00:26:55.376 "write_zeroes": true, 00:26:55.376 "zcopy": false, 00:26:55.376 "get_zone_info": false, 00:26:55.376 "zone_management": false, 00:26:55.376 "zone_append": false, 00:26:55.376 "compare": true, 00:26:55.376 "compare_and_write": true, 00:26:55.376 "abort": true, 00:26:55.376 "seek_hole": false, 00:26:55.376 "seek_data": false, 00:26:55.376 "copy": true, 00:26:55.376 "nvme_iov_md": false 00:26:55.376 }, 00:26:55.376 "memory_domains": [ 00:26:55.376 { 00:26:55.376 "dma_device_id": "system", 00:26:55.376 "dma_device_type": 1 00:26:55.376 } 00:26:55.376 ], 00:26:55.376 "driver_specific": { 00:26:55.376 "nvme": [ 00:26:55.376 { 00:26:55.376 "trid": { 00:26:55.376 "trtype": "TCP", 00:26:55.376 "adrfam": "IPv4", 00:26:55.376 "traddr": "10.0.0.2", 00:26:55.377 "trsvcid": "4420", 00:26:55.377 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:55.377 }, 00:26:55.377 "ctrlr_data": { 00:26:55.377 "cntlid": 2, 00:26:55.377 "vendor_id": "0x8086", 00:26:55.377 "model_number": "SPDK bdev Controller", 00:26:55.377 "serial_number": "00000000000000000000", 00:26:55.377 "firmware_revision": "25.01", 00:26:55.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:55.377 "oacs": { 00:26:55.377 "security": 0, 00:26:55.377 "format": 0, 00:26:55.377 "firmware": 0, 00:26:55.377 "ns_manage": 0 00:26:55.377 }, 00:26:55.377 "multi_ctrlr": true, 00:26:55.377 "ana_reporting": false 00:26:55.377 }, 00:26:55.377 "vs": { 00:26:55.377 "nvme_version": "1.3" 00:26:55.377 }, 00:26:55.377 "ns_data": { 00:26:55.377 "id": 1, 00:26:55.377 "can_share": true 00:26:55.377 } 00:26:55.377 } 00:26:55.377 ], 00:26:55.377 "mp_policy": "active_passive" 00:26:55.377 } 00:26:55.377 } 00:26:55.377 ] 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
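Note: the test only re-dumps the whole bdev, but the interesting delta between the two dumps is the controller ID (cntlid 1 before the reset, 2 after it). One way to pull just that field, assuming jq is available (it is not part of the test itself):

  scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
    | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 2 after the reset
  scripts/rpc.py bdev_nvme_detach_controller nvme0           # tear down, as in the trace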
00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.cPwC2sAXk2 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.cPwC2sAXk2 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.cPwC2sAXk2 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.377 [2024-11-26 18:22:43.342533] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:55.377 [2024-11-26 18:22:43.342727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.377 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.377 [2024-11-26 18:22:43.358574] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:55.635 nvme0n1 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.635 [ 00:26:55.635 { 00:26:55.635 "name": "nvme0n1", 00:26:55.635 "aliases": [ 00:26:55.635 "466616b3-469f-4e7f-9f4f-ac6fc8147202" 00:26:55.635 ], 00:26:55.635 "product_name": "NVMe disk", 00:26:55.635 "block_size": 512, 00:26:55.635 "num_blocks": 2097152, 00:26:55.635 "uuid": "466616b3-469f-4e7f-9f4f-ac6fc8147202", 00:26:55.635 "numa_id": 0, 00:26:55.635 "assigned_rate_limits": { 00:26:55.635 "rw_ios_per_sec": 0, 00:26:55.635 "rw_mbytes_per_sec": 0, 00:26:55.635 "r_mbytes_per_sec": 0, 00:26:55.635 "w_mbytes_per_sec": 0 00:26:55.635 }, 00:26:55.635 "claimed": false, 00:26:55.635 "zoned": false, 00:26:55.635 "supported_io_types": { 00:26:55.635 "read": true, 00:26:55.635 "write": true, 00:26:55.635 "unmap": false, 00:26:55.635 "flush": true, 00:26:55.635 "reset": true, 00:26:55.635 "nvme_admin": true, 00:26:55.635 "nvme_io": true, 00:26:55.635 "nvme_io_md": false, 00:26:55.635 "write_zeroes": true, 00:26:55.635 "zcopy": false, 00:26:55.635 "get_zone_info": false, 00:26:55.635 "zone_management": false, 00:26:55.635 "zone_append": false, 00:26:55.635 "compare": true, 00:26:55.635 "compare_and_write": true, 00:26:55.635 "abort": true, 00:26:55.635 "seek_hole": false, 00:26:55.635 "seek_data": false, 00:26:55.635 "copy": true, 00:26:55.635 "nvme_iov_md": false 00:26:55.635 }, 00:26:55.635 "memory_domains": [ 00:26:55.635 { 00:26:55.635 "dma_device_id": "system", 00:26:55.635 "dma_device_type": 1 00:26:55.635 } 00:26:55.635 ], 00:26:55.635 "driver_specific": { 00:26:55.635 "nvme": [ 00:26:55.635 { 00:26:55.635 "trid": { 00:26:55.635 "trtype": "TCP", 00:26:55.635 "adrfam": "IPv4", 00:26:55.635 "traddr": "10.0.0.2", 00:26:55.635 "trsvcid": "4421", 00:26:55.635 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:55.635 }, 00:26:55.635 "ctrlr_data": { 00:26:55.635 "cntlid": 3, 00:26:55.635 "vendor_id": "0x8086", 00:26:55.635 "model_number": "SPDK bdev Controller", 00:26:55.635 "serial_number": "00000000000000000000", 00:26:55.635 "firmware_revision": "25.01", 00:26:55.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:55.635 "oacs": { 00:26:55.635 "security": 0, 00:26:55.635 "format": 0, 00:26:55.635 "firmware": 0, 00:26:55.635 "ns_manage": 0 00:26:55.635 }, 00:26:55.635 "multi_ctrlr": true, 00:26:55.635 "ana_reporting": false 00:26:55.635 }, 00:26:55.635 "vs": { 00:26:55.635 "nvme_version": "1.3" 00:26:55.635 }, 00:26:55.635 "ns_data": { 00:26:55.635 "id": 1, 00:26:55.635 "can_share": true 00:26:55.635 } 00:26:55.635 } 00:26:55.635 ], 00:26:55.635 "mp_policy": "active_passive" 00:26:55.635 } 00:26:55.635 } 00:26:55.635 ] 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.cPwC2sAXk2 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
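Note: the TLS portion exercised above registers an interchange-format PSK in the keyring, restricts the subsystem to one explicit host, adds a --secure-channel listener on port 4421, and reconnects with the same key (the third dump shows cntlid 3 on trsvcid 4421). A sketch with the key material copied verbatim from the trace and an arbitrary temp-file name:

  KEY_PATH=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk key0
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  # tear-down mirrors the trace: detach the controller, then remove the key file
  scripts/rpc.py bdev_nvme_detach_controller nvme0
  rm -f "$KEY_PATH"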
00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.635 rmmod nvme_tcp 00:26:55.635 rmmod nvme_fabrics 00:26:55.635 rmmod nvme_keyring 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 669333 ']' 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 669333 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 669333 ']' 00:26:55.635 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 669333 00:26:55.636 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:55.636 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.636 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 669333 00:26:55.636 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:55.636 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:55.636 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 669333' 00:26:55.636 killing process with pid 669333 00:26:55.636 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 669333 00:26:55.636 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 669333 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.894 
18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:55.894 18:22:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.805 18:22:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:57.805 00:26:57.805 real 0m5.673s 00:26:57.805 user 0m2.170s 00:26:57.805 sys 0m1.948s 00:26:57.805 18:22:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:58.064 ************************************ 00:26:58.064 END TEST nvmf_async_init 00:26:58.064 ************************************ 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.064 ************************************ 00:26:58.064 START TEST dma 00:26:58.064 ************************************ 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:58.064 * Looking for test storage... 00:26:58.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:58.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.064 --rc genhtml_branch_coverage=1 00:26:58.064 --rc genhtml_function_coverage=1 00:26:58.064 --rc genhtml_legend=1 00:26:58.064 --rc geninfo_all_blocks=1 00:26:58.064 --rc geninfo_unexecuted_blocks=1 00:26:58.064 00:26:58.064 ' 00:26:58.064 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:58.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.064 --rc genhtml_branch_coverage=1 00:26:58.064 --rc genhtml_function_coverage=1 00:26:58.064 --rc genhtml_legend=1 00:26:58.064 --rc geninfo_all_blocks=1 00:26:58.065 --rc geninfo_unexecuted_blocks=1 00:26:58.065 00:26:58.065 ' 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:58.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.065 --rc genhtml_branch_coverage=1 00:26:58.065 --rc genhtml_function_coverage=1 00:26:58.065 --rc genhtml_legend=1 00:26:58.065 --rc geninfo_all_blocks=1 00:26:58.065 --rc geninfo_unexecuted_blocks=1 00:26:58.065 00:26:58.065 ' 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:58.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.065 --rc genhtml_branch_coverage=1 00:26:58.065 --rc genhtml_function_coverage=1 00:26:58.065 --rc genhtml_legend=1 00:26:58.065 --rc geninfo_all_blocks=1 00:26:58.065 --rc geninfo_unexecuted_blocks=1 00:26:58.065 00:26:58.065 ' 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.065 
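Note: the lt/cmp_versions helper traced above (used here only to decide which lcov options apply) is a plain field-by-field numeric comparison. A minimal stand-alone equivalent, assuming purely numeric dot-separated fields:

  version_lt() {
      local IFS=.- a b i
      read -ra a <<< "$1"; read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly less at this field
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                                        # equal versions: not less-than
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"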
18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.065 18:22:45 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:58.065 00:26:58.065 real 0m0.155s 00:26:58.065 user 0m0.112s 00:26:58.065 sys 0m0.052s 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:58.065 ************************************ 00:26:58.065 END TEST dma 00:26:58.065 ************************************ 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.065 ************************************ 00:26:58.065 START TEST nvmf_identify 00:26:58.065 
************************************ 00:26:58.065 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:58.324 * Looking for test storage... 00:26:58.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:58.324 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:58.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.325 --rc genhtml_branch_coverage=1 00:26:58.325 --rc genhtml_function_coverage=1 00:26:58.325 --rc genhtml_legend=1 00:26:58.325 --rc geninfo_all_blocks=1 00:26:58.325 --rc geninfo_unexecuted_blocks=1 00:26:58.325 00:26:58.325 ' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:58.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.325 --rc genhtml_branch_coverage=1 00:26:58.325 --rc genhtml_function_coverage=1 00:26:58.325 --rc genhtml_legend=1 00:26:58.325 --rc geninfo_all_blocks=1 00:26:58.325 --rc geninfo_unexecuted_blocks=1 00:26:58.325 00:26:58.325 ' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:58.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.325 --rc genhtml_branch_coverage=1 00:26:58.325 --rc genhtml_function_coverage=1 00:26:58.325 --rc genhtml_legend=1 00:26:58.325 --rc geninfo_all_blocks=1 00:26:58.325 --rc geninfo_unexecuted_blocks=1 00:26:58.325 00:26:58.325 ' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:58.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.325 --rc genhtml_branch_coverage=1 00:26:58.325 --rc genhtml_function_coverage=1 00:26:58.325 --rc genhtml_legend=1 00:26:58.325 --rc geninfo_all_blocks=1 00:26:58.325 --rc geninfo_unexecuted_blocks=1 00:26:58.325 00:26:58.325 ' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:58.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.325 18:22:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:00.859 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:00.859 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
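Note: the device scan above keys NIC selection off PCI vendor/device IDs (0x8086:0x159b is in the e810 list) and then resolves each PCI function to its net device through sysfs. A hedged one-liner equivalent for a single function, with the bus address taken from this log:

  for netdev in /sys/bus/pci/devices/0000:09:00.0/net/*; do
      [ -e "$netdev" ] && echo "${netdev##*/}"    # prints cvl_0_0 on this machine
  done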
00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:00.859 Found net devices under 0000:09:00.0: cvl_0_0 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:00.859 Found net devices under 0000:09:00.1: cvl_0_1 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:00.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:27:00.859 00:27:00.859 --- 10.0.0.2 ping statistics --- 00:27:00.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.859 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:00.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:27:00.859 00:27:00.859 --- 10.0.0.1 ping statistics --- 00:27:00.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.859 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.859 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=671592 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 671592 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 671592 ']' 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:00.860 [2024-11-26 18:22:48.604324] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
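The nvmf_tcp_init sequence above splits the two E810 ports across a network namespace so target and initiator can talk over real hardware on one host: the target port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk, the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, and an iptables rule admits NVMe/TCP traffic on port 4420. A minimal replay of those steps, with interface names and addresses taken verbatim from the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

The nvmf_tgt process itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why both ping directions are verified first.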
00:27:00.860 [2024-11-26 18:22:48.604419] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.860 [2024-11-26 18:22:48.680604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:00.860 [2024-11-26 18:22:48.742005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.860 [2024-11-26 18:22:48.742056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.860 [2024-11-26 18:22:48.742085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.860 [2024-11-26 18:22:48.742097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.860 [2024-11-26 18:22:48.742107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.860 [2024-11-26 18:22:48.743775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.860 [2024-11-26 18:22:48.743858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.860 [2024-11-26 18:22:48.743801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.860 [2024-11-26 18:22:48.743862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.860 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:00.860 [2024-11-26 18:22:48.865844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.119 Malloc0 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.119 [2024-11-26 18:22:48.950614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.119 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.119 [ 00:27:01.119 { 00:27:01.119 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:01.119 "subtype": "Discovery", 00:27:01.119 "listen_addresses": [ 00:27:01.119 { 00:27:01.119 "trtype": "TCP", 00:27:01.119 "adrfam": "IPv4", 00:27:01.119 "traddr": "10.0.0.2", 00:27:01.119 "trsvcid": "4420" 00:27:01.119 } 00:27:01.119 ], 00:27:01.119 "allow_any_host": true, 00:27:01.119 "hosts": [] 00:27:01.119 }, 00:27:01.119 { 00:27:01.119 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.119 "subtype": "NVMe", 00:27:01.119 "listen_addresses": [ 00:27:01.119 { 00:27:01.119 "trtype": "TCP", 00:27:01.119 "adrfam": "IPv4", 00:27:01.119 "traddr": "10.0.0.2", 00:27:01.119 "trsvcid": "4420" 00:27:01.119 } 00:27:01.119 ], 00:27:01.119 "allow_any_host": true, 00:27:01.119 "hosts": [], 00:27:01.119 "serial_number": "SPDK00000000000001", 00:27:01.119 "model_number": "SPDK bdev Controller", 00:27:01.119 "max_namespaces": 32, 00:27:01.119 "min_cntlid": 1, 00:27:01.120 "max_cntlid": 65519, 00:27:01.120 "namespaces": [ 00:27:01.120 { 00:27:01.120 "nsid": 1, 00:27:01.120 "bdev_name": "Malloc0", 00:27:01.120 "name": "Malloc0", 00:27:01.120 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:01.120 "eui64": "ABCDEF0123456789", 00:27:01.120 "uuid": "b1482f73-92f1-484d-a2c8-2a91ff9bb880" 00:27:01.120 } 00:27:01.120 ] 00:27:01.120 } 00:27:01.120 ] 00:27:01.120 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.120 18:22:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:01.120 [2024-11-26 18:22:48.989700] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:27:01.120 [2024-11-26 18:22:48.989741] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671625 ] 00:27:01.120 [2024-11-26 18:22:49.046345] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:27:01.120 [2024-11-26 18:22:49.046405] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:01.120 [2024-11-26 18:22:49.046417] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:01.120 [2024-11-26 18:22:49.046439] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:01.120 [2024-11-26 18:22:49.046454] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:01.120 [2024-11-26 18:22:49.050755] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:27:01.120 [2024-11-26 18:22:49.050821] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c91690 0 00:27:01.120 [2024-11-26 18:22:49.057317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:01.120 [2024-11-26 18:22:49.057340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:01.120 [2024-11-26 18:22:49.057348] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:01.120 [2024-11-26 18:22:49.057355] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:01.120 [2024-11-26 18:22:49.057415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.057428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.057436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.120 [2024-11-26 18:22:49.057453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:01.120 [2024-11-26 18:22:49.057481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 00:27:01.120 [2024-11-26 18:22:49.065333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.120 [2024-11-26 18:22:49.065351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.120 [2024-11-26 18:22:49.065359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.065366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.120 [2024-11-26 18:22:49.065386] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:01.120 [2024-11-26 18:22:49.065398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:27:01.120 [2024-11-26 18:22:49.065408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:27:01.120 [2024-11-26 18:22:49.065432] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.065441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.065448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.120 [2024-11-26 18:22:49.065463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.120 [2024-11-26 18:22:49.065488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 00:27:01.120 [2024-11-26 18:22:49.065605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.120 [2024-11-26 18:22:49.065618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.120 [2024-11-26 18:22:49.065625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.065632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.120 [2024-11-26 18:22:49.065646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:27:01.120 [2024-11-26 18:22:49.065661] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:27:01.120 [2024-11-26 18:22:49.065674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.065682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.065688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.120 [2024-11-26 18:22:49.065699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.120 [2024-11-26 18:22:49.065721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 00:27:01.120 [2024-11-26 18:22:49.065795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.120 [2024-11-26 18:22:49.065808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.120 [2024-11-26 18:22:49.065815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.065822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.120 [2024-11-26 18:22:49.065831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:27:01.120 [2024-11-26 18:22:49.065845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:01.120 [2024-11-26 18:22:49.065857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.065865] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.065872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.120 [2024-11-26 18:22:49.065882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.120 [2024-11-26 18:22:49.065904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 
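Stripped of the xtrace and DEBUG noise, the target-side configuration that this identify run is exercising was issued earlier through rpc_cmd (which in these tests forwards its arguments to SPDK's JSON-RPC server). A condensed view of that sequence, with every command and argument copied from the trace:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_get_subsystems          # dumps the JSON subsystem listing shown above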
00:27:01.120 [2024-11-26 18:22:49.065984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.120 [2024-11-26 18:22:49.065999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.120 [2024-11-26 18:22:49.066006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.120 [2024-11-26 18:22:49.066022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:01.120 [2024-11-26 18:22:49.066039] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066048] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.120 [2024-11-26 18:22:49.066066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.120 [2024-11-26 18:22:49.066092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 00:27:01.120 [2024-11-26 18:22:49.066164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.120 [2024-11-26 18:22:49.066176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.120 [2024-11-26 18:22:49.066184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.120 [2024-11-26 18:22:49.066199] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:01.120 [2024-11-26 18:22:49.066208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:01.120 [2024-11-26 18:22:49.066221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:01.120 [2024-11-26 18:22:49.066332] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:27:01.120 [2024-11-26 18:22:49.066343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:01.120 [2024-11-26 18:22:49.066357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.120 [2024-11-26 18:22:49.066383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.120 [2024-11-26 18:22:49.066405] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 00:27:01.120 [2024-11-26 18:22:49.066496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.120 [2024-11-26 18:22:49.066510] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.120 [2024-11-26 18:22:49.066517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.120 [2024-11-26 18:22:49.066533] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:01.120 [2024-11-26 18:22:49.066549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.120 [2024-11-26 18:22:49.066576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.120 [2024-11-26 18:22:49.066597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 00:27:01.120 [2024-11-26 18:22:49.066673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.120 [2024-11-26 18:22:49.066687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.120 [2024-11-26 18:22:49.066694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.120 [2024-11-26 18:22:49.066701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.120 [2024-11-26 18:22:49.066709] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:01.121 [2024-11-26 18:22:49.066718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:01.121 [2024-11-26 18:22:49.066731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:27:01.121 [2024-11-26 18:22:49.066758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:01.121 [2024-11-26 18:22:49.066775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.066783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.121 [2024-11-26 18:22:49.066794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.121 [2024-11-26 18:22:49.066816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 00:27:01.121 [2024-11-26 18:22:49.066930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.121 [2024-11-26 18:22:49.066944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.121 [2024-11-26 18:22:49.066952] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.066959] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c91690): datao=0, datal=4096, cccid=0 00:27:01.121 [2024-11-26 18:22:49.066967] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1cf3100) on tqpair(0x1c91690): expected_datao=0, payload_size=4096 00:27:01.121 [2024-11-26 18:22:49.066974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.066992] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067001] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.121 [2024-11-26 18:22:49.067039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.121 [2024-11-26 18:22:49.067046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.121 [2024-11-26 18:22:49.067066] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:27:01.121 [2024-11-26 18:22:49.067075] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:27:01.121 [2024-11-26 18:22:49.067082] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:27:01.121 [2024-11-26 18:22:49.067091] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:27:01.121 [2024-11-26 18:22:49.067099] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:27:01.121 [2024-11-26 18:22:49.067107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:27:01.121 [2024-11-26 18:22:49.067122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:01.121 [2024-11-26 18:22:49.067134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.121 [2024-11-26 18:22:49.067160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:01.121 [2024-11-26 18:22:49.067181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 00:27:01.121 [2024-11-26 18:22:49.067267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.121 [2024-11-26 18:22:49.067280] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.121 [2024-11-26 18:22:49.067287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.121 [2024-11-26 18:22:49.067318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c91690) 00:27:01.121 
[2024-11-26 18:22:49.067346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.121 [2024-11-26 18:22:49.067357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c91690) 00:27:01.121 [2024-11-26 18:22:49.067380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.121 [2024-11-26 18:22:49.067389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c91690) 00:27:01.121 [2024-11-26 18:22:49.067412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.121 [2024-11-26 18:22:49.067422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067435] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.121 [2024-11-26 18:22:49.067444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.121 [2024-11-26 18:22:49.067453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:01.121 [2024-11-26 18:22:49.067473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:01.121 [2024-11-26 18:22:49.067486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c91690) 00:27:01.121 [2024-11-26 18:22:49.067505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.121 [2024-11-26 18:22:49.067528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3100, cid 0, qid 0 00:27:01.121 [2024-11-26 18:22:49.067541] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3280, cid 1, qid 0 00:27:01.121 [2024-11-26 18:22:49.067549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3400, cid 2, qid 0 00:27:01.121 [2024-11-26 18:22:49.067557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.121 [2024-11-26 18:22:49.067565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3700, cid 4, qid 0 00:27:01.121 [2024-11-26 18:22:49.067671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.121 [2024-11-26 18:22:49.067685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.121 [2024-11-26 18:22:49.067692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:27:01.121 [2024-11-26 18:22:49.067699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3700) on tqpair=0x1c91690 00:27:01.121 [2024-11-26 18:22:49.067708] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:27:01.121 [2024-11-26 18:22:49.067716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:27:01.121 [2024-11-26 18:22:49.067734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c91690) 00:27:01.121 [2024-11-26 18:22:49.067760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.121 [2024-11-26 18:22:49.067781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3700, cid 4, qid 0 00:27:01.121 [2024-11-26 18:22:49.067874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.121 [2024-11-26 18:22:49.067887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.121 [2024-11-26 18:22:49.067894] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067900] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c91690): datao=0, datal=4096, cccid=4 00:27:01.121 [2024-11-26 18:22:49.067908] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf3700) on tqpair(0x1c91690): expected_datao=0, payload_size=4096 00:27:01.121 [2024-11-26 18:22:49.067915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067926] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067933] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.121 [2024-11-26 18:22:49.067955] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.121 [2024-11-26 18:22:49.067961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.067968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3700) on tqpair=0x1c91690 00:27:01.121 [2024-11-26 18:22:49.067987] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:27:01.121 [2024-11-26 18:22:49.068022] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.068034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c91690) 00:27:01.121 [2024-11-26 18:22:49.068045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.121 [2024-11-26 18:22:49.068056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.068064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.068070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c91690) 00:27:01.121 [2024-11-26 18:22:49.068080] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.121 [2024-11-26 18:22:49.068107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3700, cid 4, qid 0 00:27:01.121 [2024-11-26 18:22:49.068119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3880, cid 5, qid 0 00:27:01.121 [2024-11-26 18:22:49.068254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.121 [2024-11-26 18:22:49.068268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.121 [2024-11-26 18:22:49.068275] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.068281] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c91690): datao=0, datal=1024, cccid=4 00:27:01.121 [2024-11-26 18:22:49.068289] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf3700) on tqpair(0x1c91690): expected_datao=0, payload_size=1024 00:27:01.121 [2024-11-26 18:22:49.068296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.121 [2024-11-26 18:22:49.068314] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.068323] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.068332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.122 [2024-11-26 18:22:49.068341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.122 [2024-11-26 18:22:49.068351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.068359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3880) on tqpair=0x1c91690 00:27:01.122 [2024-11-26 18:22:49.108402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.122 [2024-11-26 18:22:49.108422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.122 [2024-11-26 18:22:49.108430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3700) on tqpair=0x1c91690 00:27:01.122 [2024-11-26 18:22:49.108455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c91690) 00:27:01.122 [2024-11-26 18:22:49.108476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.122 [2024-11-26 18:22:49.108507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3700, cid 4, qid 0 00:27:01.122 [2024-11-26 18:22:49.108641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.122 [2024-11-26 18:22:49.108656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.122 [2024-11-26 18:22:49.108663] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108669] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c91690): datao=0, datal=3072, cccid=4 00:27:01.122 [2024-11-26 18:22:49.108677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf3700) on tqpair(0x1c91690): expected_datao=0, payload_size=3072 00:27:01.122 [2024-11-26 18:22:49.108684] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108694] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108702] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.122 [2024-11-26 18:22:49.108723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.122 [2024-11-26 18:22:49.108730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3700) on tqpair=0x1c91690 00:27:01.122 [2024-11-26 18:22:49.108751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c91690) 00:27:01.122 [2024-11-26 18:22:49.108770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.122 [2024-11-26 18:22:49.108799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3700, cid 4, qid 0 00:27:01.122 [2024-11-26 18:22:49.108896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.122 [2024-11-26 18:22:49.108910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.122 [2024-11-26 18:22:49.108917] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108923] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c91690): datao=0, datal=8, cccid=4 00:27:01.122 [2024-11-26 18:22:49.108931] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cf3700) on tqpair(0x1c91690): expected_datao=0, payload_size=8 00:27:01.122 [2024-11-26 18:22:49.108938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108948] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.122 [2024-11-26 18:22:49.108955] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.383 [2024-11-26 18:22:49.149388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.383 [2024-11-26 18:22:49.149407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.383 [2024-11-26 18:22:49.149414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.383 [2024-11-26 18:22:49.149426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3700) on tqpair=0x1c91690 00:27:01.383 ===================================================== 00:27:01.383 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:01.383 ===================================================== 00:27:01.383 Controller Capabilities/Features 00:27:01.383 ================================ 00:27:01.383 Vendor ID: 0000 00:27:01.383 Subsystem Vendor ID: 0000 00:27:01.383 Serial Number: .................... 00:27:01.383 Model Number: ........................................ 
00:27:01.383 Firmware Version: 25.01 00:27:01.383 Recommended Arb Burst: 0 00:27:01.383 IEEE OUI Identifier: 00 00 00 00:27:01.383 Multi-path I/O 00:27:01.383 May have multiple subsystem ports: No 00:27:01.383 May have multiple controllers: No 00:27:01.383 Associated with SR-IOV VF: No 00:27:01.383 Max Data Transfer Size: 131072 00:27:01.383 Max Number of Namespaces: 0 00:27:01.383 Max Number of I/O Queues: 1024 00:27:01.383 NVMe Specification Version (VS): 1.3 00:27:01.383 NVMe Specification Version (Identify): 1.3 00:27:01.383 Maximum Queue Entries: 128 00:27:01.383 Contiguous Queues Required: Yes 00:27:01.384 Arbitration Mechanisms Supported 00:27:01.384 Weighted Round Robin: Not Supported 00:27:01.384 Vendor Specific: Not Supported 00:27:01.384 Reset Timeout: 15000 ms 00:27:01.384 Doorbell Stride: 4 bytes 00:27:01.384 NVM Subsystem Reset: Not Supported 00:27:01.384 Command Sets Supported 00:27:01.384 NVM Command Set: Supported 00:27:01.384 Boot Partition: Not Supported 00:27:01.384 Memory Page Size Minimum: 4096 bytes 00:27:01.384 Memory Page Size Maximum: 4096 bytes 00:27:01.384 Persistent Memory Region: Not Supported 00:27:01.384 Optional Asynchronous Events Supported 00:27:01.384 Namespace Attribute Notices: Not Supported 00:27:01.384 Firmware Activation Notices: Not Supported 00:27:01.384 ANA Change Notices: Not Supported 00:27:01.384 PLE Aggregate Log Change Notices: Not Supported 00:27:01.384 LBA Status Info Alert Notices: Not Supported 00:27:01.384 EGE Aggregate Log Change Notices: Not Supported 00:27:01.384 Normal NVM Subsystem Shutdown event: Not Supported 00:27:01.384 Zone Descriptor Change Notices: Not Supported 00:27:01.384 Discovery Log Change Notices: Supported 00:27:01.384 Controller Attributes 00:27:01.384 128-bit Host Identifier: Not Supported 00:27:01.384 Non-Operational Permissive Mode: Not Supported 00:27:01.384 NVM Sets: Not Supported 00:27:01.384 Read Recovery Levels: Not Supported 00:27:01.384 Endurance Groups: Not Supported 00:27:01.384 Predictable Latency Mode: Not Supported 00:27:01.384 Traffic Based Keep ALive: Not Supported 00:27:01.384 Namespace Granularity: Not Supported 00:27:01.384 SQ Associations: Not Supported 00:27:01.384 UUID List: Not Supported 00:27:01.384 Multi-Domain Subsystem: Not Supported 00:27:01.384 Fixed Capacity Management: Not Supported 00:27:01.384 Variable Capacity Management: Not Supported 00:27:01.384 Delete Endurance Group: Not Supported 00:27:01.384 Delete NVM Set: Not Supported 00:27:01.384 Extended LBA Formats Supported: Not Supported 00:27:01.384 Flexible Data Placement Supported: Not Supported 00:27:01.384 00:27:01.384 Controller Memory Buffer Support 00:27:01.384 ================================ 00:27:01.384 Supported: No 00:27:01.384 00:27:01.384 Persistent Memory Region Support 00:27:01.384 ================================ 00:27:01.384 Supported: No 00:27:01.384 00:27:01.384 Admin Command Set Attributes 00:27:01.384 ============================ 00:27:01.384 Security Send/Receive: Not Supported 00:27:01.384 Format NVM: Not Supported 00:27:01.384 Firmware Activate/Download: Not Supported 00:27:01.384 Namespace Management: Not Supported 00:27:01.384 Device Self-Test: Not Supported 00:27:01.384 Directives: Not Supported 00:27:01.384 NVMe-MI: Not Supported 00:27:01.384 Virtualization Management: Not Supported 00:27:01.384 Doorbell Buffer Config: Not Supported 00:27:01.384 Get LBA Status Capability: Not Supported 00:27:01.384 Command & Feature Lockdown Capability: Not Supported 00:27:01.384 Abort Command Limit: 1 00:27:01.384 Async 
Event Request Limit: 4 00:27:01.384 Number of Firmware Slots: N/A 00:27:01.384 Firmware Slot 1 Read-Only: N/A 00:27:01.384 Firmware Activation Without Reset: N/A 00:27:01.384 Multiple Update Detection Support: N/A 00:27:01.384 Firmware Update Granularity: No Information Provided 00:27:01.384 Per-Namespace SMART Log: No 00:27:01.384 Asymmetric Namespace Access Log Page: Not Supported 00:27:01.384 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:01.384 Command Effects Log Page: Not Supported 00:27:01.384 Get Log Page Extended Data: Supported 00:27:01.384 Telemetry Log Pages: Not Supported 00:27:01.384 Persistent Event Log Pages: Not Supported 00:27:01.384 Supported Log Pages Log Page: May Support 00:27:01.384 Commands Supported & Effects Log Page: Not Supported 00:27:01.384 Feature Identifiers & Effects Log Page:May Support 00:27:01.384 NVMe-MI Commands & Effects Log Page: May Support 00:27:01.384 Data Area 4 for Telemetry Log: Not Supported 00:27:01.384 Error Log Page Entries Supported: 128 00:27:01.384 Keep Alive: Not Supported 00:27:01.384 00:27:01.384 NVM Command Set Attributes 00:27:01.384 ========================== 00:27:01.384 Submission Queue Entry Size 00:27:01.384 Max: 1 00:27:01.384 Min: 1 00:27:01.384 Completion Queue Entry Size 00:27:01.384 Max: 1 00:27:01.384 Min: 1 00:27:01.384 Number of Namespaces: 0 00:27:01.384 Compare Command: Not Supported 00:27:01.384 Write Uncorrectable Command: Not Supported 00:27:01.384 Dataset Management Command: Not Supported 00:27:01.384 Write Zeroes Command: Not Supported 00:27:01.384 Set Features Save Field: Not Supported 00:27:01.384 Reservations: Not Supported 00:27:01.384 Timestamp: Not Supported 00:27:01.384 Copy: Not Supported 00:27:01.384 Volatile Write Cache: Not Present 00:27:01.384 Atomic Write Unit (Normal): 1 00:27:01.384 Atomic Write Unit (PFail): 1 00:27:01.384 Atomic Compare & Write Unit: 1 00:27:01.384 Fused Compare & Write: Supported 00:27:01.384 Scatter-Gather List 00:27:01.384 SGL Command Set: Supported 00:27:01.384 SGL Keyed: Supported 00:27:01.384 SGL Bit Bucket Descriptor: Not Supported 00:27:01.384 SGL Metadata Pointer: Not Supported 00:27:01.384 Oversized SGL: Not Supported 00:27:01.384 SGL Metadata Address: Not Supported 00:27:01.384 SGL Offset: Supported 00:27:01.384 Transport SGL Data Block: Not Supported 00:27:01.384 Replay Protected Memory Block: Not Supported 00:27:01.384 00:27:01.384 Firmware Slot Information 00:27:01.384 ========================= 00:27:01.384 Active slot: 0 00:27:01.384 00:27:01.384 00:27:01.384 Error Log 00:27:01.384 ========= 00:27:01.384 00:27:01.384 Active Namespaces 00:27:01.384 ================= 00:27:01.384 Discovery Log Page 00:27:01.384 ================== 00:27:01.384 Generation Counter: 2 00:27:01.384 Number of Records: 2 00:27:01.384 Record Format: 0 00:27:01.384 00:27:01.384 Discovery Log Entry 0 00:27:01.384 ---------------------- 00:27:01.384 Transport Type: 3 (TCP) 00:27:01.384 Address Family: 1 (IPv4) 00:27:01.384 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:01.384 Entry Flags: 00:27:01.384 Duplicate Returned Information: 1 00:27:01.384 Explicit Persistent Connection Support for Discovery: 1 00:27:01.384 Transport Requirements: 00:27:01.384 Secure Channel: Not Required 00:27:01.384 Port ID: 0 (0x0000) 00:27:01.384 Controller ID: 65535 (0xffff) 00:27:01.384 Admin Max SQ Size: 128 00:27:01.384 Transport Service Identifier: 4420 00:27:01.384 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:01.384 Transport Address: 10.0.0.2 00:27:01.384 
Discovery Log Entry 1 00:27:01.384 ---------------------- 00:27:01.384 Transport Type: 3 (TCP) 00:27:01.384 Address Family: 1 (IPv4) 00:27:01.384 Subsystem Type: 2 (NVM Subsystem) 00:27:01.384 Entry Flags: 00:27:01.384 Duplicate Returned Information: 0 00:27:01.384 Explicit Persistent Connection Support for Discovery: 0 00:27:01.384 Transport Requirements: 00:27:01.384 Secure Channel: Not Required 00:27:01.384 Port ID: 0 (0x0000) 00:27:01.384 Controller ID: 65535 (0xffff) 00:27:01.384 Admin Max SQ Size: 128 00:27:01.384 Transport Service Identifier: 4420 00:27:01.384 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:01.384 Transport Address: 10.0.0.2 [2024-11-26 18:22:49.149542] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:27:01.384 [2024-11-26 18:22:49.149564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3100) on tqpair=0x1c91690 00:27:01.384 [2024-11-26 18:22:49.149576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.384 [2024-11-26 18:22:49.149585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3280) on tqpair=0x1c91690 00:27:01.384 [2024-11-26 18:22:49.149593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.384 [2024-11-26 18:22:49.149601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3400) on tqpair=0x1c91690 00:27:01.384 [2024-11-26 18:22:49.149609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.384 [2024-11-26 18:22:49.149617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.384 [2024-11-26 18:22:49.149624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.384 [2024-11-26 18:22:49.149637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.384 [2024-11-26 18:22:49.149645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.384 [2024-11-26 18:22:49.149652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.384 [2024-11-26 18:22:49.149663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.384 [2024-11-26 18:22:49.149704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.384 [2024-11-26 18:22:49.149851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.149866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.149872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.149879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.149891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.149899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.149906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.385 [2024-11-26 
18:22:49.149916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.385 [2024-11-26 18:22:49.149943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.385 [2024-11-26 18:22:49.150039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.150052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.150059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.150074] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:27:01.385 [2024-11-26 18:22:49.150081] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:27:01.385 [2024-11-26 18:22:49.150097] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.385 [2024-11-26 18:22:49.150122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.385 [2024-11-26 18:22:49.150148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.385 [2024-11-26 18:22:49.150223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.150237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.150244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150250] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.150267] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.385 [2024-11-26 18:22:49.150293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.385 [2024-11-26 18:22:49.150322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.385 [2024-11-26 18:22:49.150418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.150432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.150439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.150462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150477] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.385 [2024-11-26 18:22:49.150488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.385 [2024-11-26 18:22:49.150509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.385 [2024-11-26 18:22:49.150583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.150595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.150602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.150625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.385 [2024-11-26 18:22:49.150651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.385 [2024-11-26 18:22:49.150671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.385 [2024-11-26 18:22:49.150751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.150764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.150771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.150794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.385 [2024-11-26 18:22:49.150820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.385 [2024-11-26 18:22:49.150841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.385 [2024-11-26 18:22:49.150939] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.150953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.150960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.150983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.150999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.385 [2024-11-26 18:22:49.151009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.385 [2024-11-26 18:22:49.151030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.385 [2024-11-26 18:22:49.151104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.151116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.151123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.151129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.151145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.151154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.151161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.385 [2024-11-26 18:22:49.151171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.385 [2024-11-26 18:22:49.151192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.385 [2024-11-26 18:22:49.151262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.151274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.151281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.151288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.155311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.155325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.155332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c91690) 00:27:01.385 [2024-11-26 18:22:49.155342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.385 [2024-11-26 18:22:49.155364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cf3580, cid 3, qid 0 00:27:01.385 [2024-11-26 18:22:49.155479] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.385 [2024-11-26 18:22:49.155492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.385 [2024-11-26 18:22:49.155499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.385 [2024-11-26 18:22:49.155505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cf3580) on tqpair=0x1c91690 00:27:01.385 [2024-11-26 18:22:49.155518] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:27:01.385 00:27:01.385 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:01.385 [2024-11-26 18:22:49.188536] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:27:01.385 [2024-11-26 18:22:49.188577] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid671632 ] 00:27:01.385 [2024-11-26 18:22:49.234832] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:27:01.385 [2024-11-26 18:22:49.234884] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:01.385 [2024-11-26 18:22:49.234895] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:01.385 [2024-11-26 18:22:49.234913] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:01.385 [2024-11-26 18:22:49.234925] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:01.385 [2024-11-26 18:22:49.238572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:27:01.385 [2024-11-26 18:22:49.238625] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1365690 0 00:27:01.385 [2024-11-26 18:22:49.246316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:01.385 [2024-11-26 18:22:49.246334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:01.385 [2024-11-26 18:22:49.246342] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:01.385 [2024-11-26 18:22:49.246348] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:01.386 [2024-11-26 18:22:49.246399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.246412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.246419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.386 [2024-11-26 18:22:49.246432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:01.386 [2024-11-26 18:22:49.246460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.386 [2024-11-26 18:22:49.253316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.386 [2024-11-26 18:22:49.253334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.386 [2024-11-26 18:22:49.253342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.386 [2024-11-26 18:22:49.253368] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:01.386 [2024-11-26 18:22:49.253380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:27:01.386 [2024-11-26 18:22:49.253390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:27:01.386 [2024-11-26 18:22:49.253410] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253426] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.386 [2024-11-26 18:22:49.253438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.386 [2024-11-26 18:22:49.253463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.386 [2024-11-26 18:22:49.253561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.386 [2024-11-26 18:22:49.253576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.386 [2024-11-26 18:22:49.253583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.386 [2024-11-26 18:22:49.253606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:27:01.386 [2024-11-26 18:22:49.253622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:27:01.386 [2024-11-26 18:22:49.253635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253642] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.386 [2024-11-26 18:22:49.253659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.386 [2024-11-26 18:22:49.253682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.386 [2024-11-26 18:22:49.253767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.386 [2024-11-26 18:22:49.253779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.386 [2024-11-26 18:22:49.253786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.386 [2024-11-26 18:22:49.253801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:27:01.386 [2024-11-26 18:22:49.253815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:27:01.386 [2024-11-26 18:22:49.253827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.386 [2024-11-26 18:22:49.253851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.386 [2024-11-26 18:22:49.253873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.386 [2024-11-26 18:22:49.253954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.386 [2024-11-26 18:22:49.253967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.386 [2024-11-26 
18:22:49.253974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.253980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.386 [2024-11-26 18:22:49.253989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:01.386 [2024-11-26 18:22:49.254005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.386 [2024-11-26 18:22:49.254031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.386 [2024-11-26 18:22:49.254052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.386 [2024-11-26 18:22:49.254141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.386 [2024-11-26 18:22:49.254155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.386 [2024-11-26 18:22:49.254162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.386 [2024-11-26 18:22:49.254176] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:27:01.386 [2024-11-26 18:22:49.254184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:27:01.386 [2024-11-26 18:22:49.254202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:01.386 [2024-11-26 18:22:49.254313] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:27:01.386 [2024-11-26 18:22:49.254324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:01.386 [2024-11-26 18:22:49.254336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.386 [2024-11-26 18:22:49.254361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.386 [2024-11-26 18:22:49.254383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.386 [2024-11-26 18:22:49.254476] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.386 [2024-11-26 18:22:49.254490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.386 [2024-11-26 18:22:49.254497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254504] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.386 
[2024-11-26 18:22:49.254512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:01.386 [2024-11-26 18:22:49.254528] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.386 [2024-11-26 18:22:49.254554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.386 [2024-11-26 18:22:49.254575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.386 [2024-11-26 18:22:49.254668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.386 [2024-11-26 18:22:49.254680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.386 [2024-11-26 18:22:49.254687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.386 [2024-11-26 18:22:49.254701] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:01.386 [2024-11-26 18:22:49.254710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:27:01.386 [2024-11-26 18:22:49.254723] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:27:01.386 [2024-11-26 18:22:49.254737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:27:01.386 [2024-11-26 18:22:49.254751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.386 [2024-11-26 18:22:49.254759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.387 [2024-11-26 18:22:49.254770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.387 [2024-11-26 18:22:49.254791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.387 [2024-11-26 18:22:49.254917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.387 [2024-11-26 18:22:49.254930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.387 [2024-11-26 18:22:49.254941] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.254948] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1365690): datao=0, datal=4096, cccid=0 00:27:01.387 [2024-11-26 18:22:49.254955] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c7100) on tqpair(0x1365690): expected_datao=0, payload_size=4096 00:27:01.387 [2024-11-26 18:22:49.254962] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.254979] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.254988] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.387 [2024-11-26 18:22:49.298347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.387 [2024-11-26 18:22:49.298354] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.387 [2024-11-26 18:22:49.298372] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:27:01.387 [2024-11-26 18:22:49.298381] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:27:01.387 [2024-11-26 18:22:49.298388] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:27:01.387 [2024-11-26 18:22:49.298395] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:27:01.387 [2024-11-26 18:22:49.298402] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:27:01.387 [2024-11-26 18:22:49.298410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:27:01.387 [2024-11-26 18:22:49.298424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:27:01.387 [2024-11-26 18:22:49.298452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298467] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.387 [2024-11-26 18:22:49.298478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:01.387 [2024-11-26 18:22:49.298502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.387 [2024-11-26 18:22:49.298585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.387 [2024-11-26 18:22:49.298600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.387 [2024-11-26 18:22:49.298607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.387 [2024-11-26 18:22:49.298624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1365690) 00:27:01.387 [2024-11-26 18:22:49.298648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.387 [2024-11-26 18:22:49.298658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298665] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.387 [2024-11-26 
18:22:49.298671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1365690) 00:27:01.387 [2024-11-26 18:22:49.298680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.387 [2024-11-26 18:22:49.298697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298705] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1365690) 00:27:01.387 [2024-11-26 18:22:49.298720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.387 [2024-11-26 18:22:49.298730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1365690) 00:27:01.387 [2024-11-26 18:22:49.298751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.387 [2024-11-26 18:22:49.298760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:01.387 [2024-11-26 18:22:49.298779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:01.387 [2024-11-26 18:22:49.298792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.298800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1365690) 00:27:01.387 [2024-11-26 18:22:49.298810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.387 [2024-11-26 18:22:49.298833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7100, cid 0, qid 0 00:27:01.387 [2024-11-26 18:22:49.298845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7280, cid 1, qid 0 00:27:01.387 [2024-11-26 18:22:49.298853] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7400, cid 2, qid 0 00:27:01.387 [2024-11-26 18:22:49.298861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7580, cid 3, qid 0 00:27:01.387 [2024-11-26 18:22:49.298869] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7700, cid 4, qid 0 00:27:01.387 [2024-11-26 18:22:49.298985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.387 [2024-11-26 18:22:49.298997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.387 [2024-11-26 18:22:49.299004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.299011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7700) on tqpair=0x1365690 00:27:01.387 [2024-11-26 18:22:49.299019] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:27:01.387 [2024-11-26 18:22:49.299028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:01.387 [2024-11-26 18:22:49.299046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:27:01.387 [2024-11-26 18:22:49.299058] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:01.387 [2024-11-26 18:22:49.299068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.299075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.299082] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1365690) 00:27:01.387 [2024-11-26 18:22:49.299092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:01.387 [2024-11-26 18:22:49.299114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7700, cid 4, qid 0 00:27:01.387 [2024-11-26 18:22:49.299199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.387 [2024-11-26 18:22:49.299215] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.387 [2024-11-26 18:22:49.299223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.299229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7700) on tqpair=0x1365690 00:27:01.387 [2024-11-26 18:22:49.299299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:27:01.387 [2024-11-26 18:22:49.299335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:01.387 [2024-11-26 18:22:49.299351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.299360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1365690) 00:27:01.387 [2024-11-26 18:22:49.299370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.387 [2024-11-26 18:22:49.299393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7700, cid 4, qid 0 00:27:01.387 [2024-11-26 18:22:49.299500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.387 [2024-11-26 18:22:49.299515] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.387 [2024-11-26 18:22:49.299522] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.299528] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1365690): datao=0, datal=4096, cccid=4 00:27:01.387 [2024-11-26 18:22:49.299535] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c7700) on tqpair(0x1365690): expected_datao=0, payload_size=4096 00:27:01.387 [2024-11-26 18:22:49.299542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.299560] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 18:22:49.299569] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.387 [2024-11-26 
18:22:49.340382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.387 [2024-11-26 18:22:49.340401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.387 [2024-11-26 18:22:49.340408] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.340415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7700) on tqpair=0x1365690 00:27:01.388 [2024-11-26 18:22:49.340440] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:27:01.388 [2024-11-26 18:22:49.340458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.340476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.340490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.340498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1365690) 00:27:01.388 [2024-11-26 18:22:49.340509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.388 [2024-11-26 18:22:49.340533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7700, cid 4, qid 0 00:27:01.388 [2024-11-26 18:22:49.340651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.388 [2024-11-26 18:22:49.340663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.388 [2024-11-26 18:22:49.340670] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.340677] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1365690): datao=0, datal=4096, cccid=4 00:27:01.388 [2024-11-26 18:22:49.340684] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c7700) on tqpair(0x1365690): expected_datao=0, payload_size=4096 00:27:01.388 [2024-11-26 18:22:49.340695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.340713] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.340722] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.388 [2024-11-26 18:22:49.385349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.388 [2024-11-26 18:22:49.385356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7700) on tqpair=0x1365690 00:27:01.388 [2024-11-26 18:22:49.385379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.385397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.385427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1365690) 00:27:01.388 [2024-11-26 18:22:49.385447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.388 [2024-11-26 18:22:49.385471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7700, cid 4, qid 0 00:27:01.388 [2024-11-26 18:22:49.385574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.388 [2024-11-26 18:22:49.385589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.388 [2024-11-26 18:22:49.385596] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385603] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1365690): datao=0, datal=4096, cccid=4 00:27:01.388 [2024-11-26 18:22:49.385610] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c7700) on tqpair(0x1365690): expected_datao=0, payload_size=4096 00:27:01.388 [2024-11-26 18:22:49.385617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385628] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385635] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.388 [2024-11-26 18:22:49.385656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.388 [2024-11-26 18:22:49.385663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7700) on tqpair=0x1365690 00:27:01.388 [2024-11-26 18:22:49.385687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.385704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.385718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.385729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.385738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.385746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.385755] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:27:01.388 [2024-11-26 18:22:49.385763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:27:01.388 [2024-11-26 18:22:49.385775] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:27:01.388 [2024-11-26 18:22:49.385794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.388 
[2024-11-26 18:22:49.385803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1365690) 00:27:01.388 [2024-11-26 18:22:49.385814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.388 [2024-11-26 18:22:49.385825] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.385838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1365690) 00:27:01.388 [2024-11-26 18:22:49.385847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:01.388 [2024-11-26 18:22:49.385873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7700, cid 4, qid 0 00:27:01.388 [2024-11-26 18:22:49.385886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7880, cid 5, qid 0 00:27:01.388 [2024-11-26 18:22:49.385985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.388 [2024-11-26 18:22:49.385999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.388 [2024-11-26 18:22:49.386005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.386012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7700) on tqpair=0x1365690 00:27:01.388 [2024-11-26 18:22:49.386022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.388 [2024-11-26 18:22:49.386031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.388 [2024-11-26 18:22:49.386037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.386044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7880) on tqpair=0x1365690 00:27:01.388 [2024-11-26 18:22:49.386059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.386068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1365690) 00:27:01.388 [2024-11-26 18:22:49.386078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.388 [2024-11-26 18:22:49.386100] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7880, cid 5, qid 0 00:27:01.388 [2024-11-26 18:22:49.386193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.388 [2024-11-26 18:22:49.386207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.388 [2024-11-26 18:22:49.386214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.386221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7880) on tqpair=0x1365690 00:27:01.388 [2024-11-26 18:22:49.386236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.386245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1365690) 00:27:01.388 [2024-11-26 18:22:49.386256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.388 [2024-11-26 18:22:49.386276] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7880, cid 5, qid 0 00:27:01.388 [2024-11-26 18:22:49.386383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.388 [2024-11-26 18:22:49.386397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.388 [2024-11-26 18:22:49.386403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.388 [2024-11-26 18:22:49.386410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7880) on tqpair=0x1365690 00:27:01.388 [2024-11-26 18:22:49.386425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.386439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1365690) 00:27:01.389 [2024-11-26 18:22:49.386450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.389 [2024-11-26 18:22:49.386471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7880, cid 5, qid 0 00:27:01.389 [2024-11-26 18:22:49.386566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.389 [2024-11-26 18:22:49.386581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.389 [2024-11-26 18:22:49.386587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.386594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7880) on tqpair=0x1365690 00:27:01.389 [2024-11-26 18:22:49.386618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.386629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1365690) 00:27:01.389 [2024-11-26 18:22:49.386640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.389 [2024-11-26 18:22:49.386652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.386660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1365690) 00:27:01.389 [2024-11-26 18:22:49.386669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.389 [2024-11-26 18:22:49.386680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.386688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1365690) 00:27:01.389 [2024-11-26 18:22:49.386697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.389 [2024-11-26 18:22:49.386708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.386716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1365690) 00:27:01.389 [2024-11-26 18:22:49.386725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.389 [2024-11-26 18:22:49.386747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7880, cid 5, qid 0 00:27:01.389 
[2024-11-26 18:22:49.386758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7700, cid 4, qid 0 00:27:01.389 [2024-11-26 18:22:49.386766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7a00, cid 6, qid 0 00:27:01.389 [2024-11-26 18:22:49.386774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7b80, cid 7, qid 0 00:27:01.389 [2024-11-26 18:22:49.386942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.389 [2024-11-26 18:22:49.386954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.389 [2024-11-26 18:22:49.386961] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.386967] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1365690): datao=0, datal=8192, cccid=5 00:27:01.389 [2024-11-26 18:22:49.386974] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c7880) on tqpair(0x1365690): expected_datao=0, payload_size=8192 00:27:01.389 [2024-11-26 18:22:49.386981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.386999] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387008] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387020] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.389 [2024-11-26 18:22:49.387029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.389 [2024-11-26 18:22:49.387036] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387045] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1365690): datao=0, datal=512, cccid=4 00:27:01.389 [2024-11-26 18:22:49.387053] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c7700) on tqpair(0x1365690): expected_datao=0, payload_size=512 00:27:01.389 [2024-11-26 18:22:49.387060] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387069] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387076] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387084] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.389 [2024-11-26 18:22:49.387093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.389 [2024-11-26 18:22:49.387099] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387105] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1365690): datao=0, datal=512, cccid=6 00:27:01.389 [2024-11-26 18:22:49.387112] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c7a00) on tqpair(0x1365690): expected_datao=0, payload_size=512 00:27:01.389 [2024-11-26 18:22:49.387119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387128] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387135] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:01.389 [2024-11-26 18:22:49.387152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:01.389 [2024-11-26 18:22:49.387158] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387164] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1365690): datao=0, datal=4096, cccid=7 00:27:01.389 [2024-11-26 18:22:49.387171] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13c7b80) on tqpair(0x1365690): expected_datao=0, payload_size=4096 00:27:01.389 [2024-11-26 18:22:49.387178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387187] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387194] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.389 [2024-11-26 18:22:49.387214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.389 [2024-11-26 18:22:49.387221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7880) on tqpair=0x1365690 00:27:01.389 [2024-11-26 18:22:49.387248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.389 [2024-11-26 18:22:49.387260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.389 [2024-11-26 18:22:49.387266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7700) on tqpair=0x1365690 00:27:01.389 [2024-11-26 18:22:49.387287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.389 [2024-11-26 18:22:49.387324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.389 [2024-11-26 18:22:49.387332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7a00) on tqpair=0x1365690 00:27:01.389 [2024-11-26 18:22:49.387350] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.389 [2024-11-26 18:22:49.387359] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.389 [2024-11-26 18:22:49.387380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.389 [2024-11-26 18:22:49.387387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7b80) on tqpair=0x1365690 00:27:01.389 ===================================================== 00:27:01.389 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.389 ===================================================== 00:27:01.389 Controller Capabilities/Features 00:27:01.389 ================================ 00:27:01.389 Vendor ID: 8086 00:27:01.389 Subsystem Vendor ID: 8086 00:27:01.389 Serial Number: SPDK00000000000001 00:27:01.389 Model Number: SPDK bdev Controller 00:27:01.389 Firmware Version: 25.01 00:27:01.389 Recommended Arb Burst: 6 00:27:01.389 IEEE OUI Identifier: e4 d2 5c 00:27:01.389 Multi-path I/O 00:27:01.389 May have multiple subsystem ports: Yes 00:27:01.389 May have multiple controllers: Yes 00:27:01.389 Associated with SR-IOV VF: No 00:27:01.389 Max Data Transfer Size: 131072 00:27:01.389 Max Number of Namespaces: 32 00:27:01.389 Max Number of I/O Queues: 127 00:27:01.389 NVMe Specification Version (VS): 1.3 00:27:01.389 NVMe Specification Version (Identify): 1.3 
00:27:01.389 Maximum Queue Entries: 128 00:27:01.389 Contiguous Queues Required: Yes 00:27:01.389 Arbitration Mechanisms Supported 00:27:01.389 Weighted Round Robin: Not Supported 00:27:01.389 Vendor Specific: Not Supported 00:27:01.389 Reset Timeout: 15000 ms 00:27:01.389 Doorbell Stride: 4 bytes 00:27:01.389 NVM Subsystem Reset: Not Supported 00:27:01.389 Command Sets Supported 00:27:01.389 NVM Command Set: Supported 00:27:01.389 Boot Partition: Not Supported 00:27:01.389 Memory Page Size Minimum: 4096 bytes 00:27:01.389 Memory Page Size Maximum: 4096 bytes 00:27:01.389 Persistent Memory Region: Not Supported 00:27:01.389 Optional Asynchronous Events Supported 00:27:01.389 Namespace Attribute Notices: Supported 00:27:01.389 Firmware Activation Notices: Not Supported 00:27:01.389 ANA Change Notices: Not Supported 00:27:01.389 PLE Aggregate Log Change Notices: Not Supported 00:27:01.389 LBA Status Info Alert Notices: Not Supported 00:27:01.389 EGE Aggregate Log Change Notices: Not Supported 00:27:01.389 Normal NVM Subsystem Shutdown event: Not Supported 00:27:01.389 Zone Descriptor Change Notices: Not Supported 00:27:01.389 Discovery Log Change Notices: Not Supported 00:27:01.389 Controller Attributes 00:27:01.389 128-bit Host Identifier: Supported 00:27:01.389 Non-Operational Permissive Mode: Not Supported 00:27:01.389 NVM Sets: Not Supported 00:27:01.389 Read Recovery Levels: Not Supported 00:27:01.389 Endurance Groups: Not Supported 00:27:01.389 Predictable Latency Mode: Not Supported 00:27:01.389 Traffic Based Keep ALive: Not Supported 00:27:01.389 Namespace Granularity: Not Supported 00:27:01.389 SQ Associations: Not Supported 00:27:01.390 UUID List: Not Supported 00:27:01.390 Multi-Domain Subsystem: Not Supported 00:27:01.390 Fixed Capacity Management: Not Supported 00:27:01.390 Variable Capacity Management: Not Supported 00:27:01.390 Delete Endurance Group: Not Supported 00:27:01.390 Delete NVM Set: Not Supported 00:27:01.390 Extended LBA Formats Supported: Not Supported 00:27:01.390 Flexible Data Placement Supported: Not Supported 00:27:01.390 00:27:01.390 Controller Memory Buffer Support 00:27:01.390 ================================ 00:27:01.390 Supported: No 00:27:01.390 00:27:01.390 Persistent Memory Region Support 00:27:01.390 ================================ 00:27:01.390 Supported: No 00:27:01.390 00:27:01.390 Admin Command Set Attributes 00:27:01.390 ============================ 00:27:01.390 Security Send/Receive: Not Supported 00:27:01.390 Format NVM: Not Supported 00:27:01.390 Firmware Activate/Download: Not Supported 00:27:01.390 Namespace Management: Not Supported 00:27:01.390 Device Self-Test: Not Supported 00:27:01.390 Directives: Not Supported 00:27:01.390 NVMe-MI: Not Supported 00:27:01.390 Virtualization Management: Not Supported 00:27:01.390 Doorbell Buffer Config: Not Supported 00:27:01.390 Get LBA Status Capability: Not Supported 00:27:01.390 Command & Feature Lockdown Capability: Not Supported 00:27:01.390 Abort Command Limit: 4 00:27:01.390 Async Event Request Limit: 4 00:27:01.390 Number of Firmware Slots: N/A 00:27:01.390 Firmware Slot 1 Read-Only: N/A 00:27:01.390 Firmware Activation Without Reset: N/A 00:27:01.390 Multiple Update Detection Support: N/A 00:27:01.390 Firmware Update Granularity: No Information Provided 00:27:01.390 Per-Namespace SMART Log: No 00:27:01.390 Asymmetric Namespace Access Log Page: Not Supported 00:27:01.390 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:01.390 Command Effects Log Page: Supported 00:27:01.390 Get Log Page Extended 
Data: Supported 00:27:01.390 Telemetry Log Pages: Not Supported 00:27:01.390 Persistent Event Log Pages: Not Supported 00:27:01.390 Supported Log Pages Log Page: May Support 00:27:01.390 Commands Supported & Effects Log Page: Not Supported 00:27:01.390 Feature Identifiers & Effects Log Page:May Support 00:27:01.390 NVMe-MI Commands & Effects Log Page: May Support 00:27:01.390 Data Area 4 for Telemetry Log: Not Supported 00:27:01.390 Error Log Page Entries Supported: 128 00:27:01.390 Keep Alive: Supported 00:27:01.390 Keep Alive Granularity: 10000 ms 00:27:01.390 00:27:01.390 NVM Command Set Attributes 00:27:01.390 ========================== 00:27:01.390 Submission Queue Entry Size 00:27:01.390 Max: 64 00:27:01.390 Min: 64 00:27:01.390 Completion Queue Entry Size 00:27:01.390 Max: 16 00:27:01.390 Min: 16 00:27:01.390 Number of Namespaces: 32 00:27:01.390 Compare Command: Supported 00:27:01.390 Write Uncorrectable Command: Not Supported 00:27:01.390 Dataset Management Command: Supported 00:27:01.390 Write Zeroes Command: Supported 00:27:01.390 Set Features Save Field: Not Supported 00:27:01.390 Reservations: Supported 00:27:01.390 Timestamp: Not Supported 00:27:01.390 Copy: Supported 00:27:01.390 Volatile Write Cache: Present 00:27:01.390 Atomic Write Unit (Normal): 1 00:27:01.390 Atomic Write Unit (PFail): 1 00:27:01.390 Atomic Compare & Write Unit: 1 00:27:01.390 Fused Compare & Write: Supported 00:27:01.390 Scatter-Gather List 00:27:01.390 SGL Command Set: Supported 00:27:01.390 SGL Keyed: Supported 00:27:01.390 SGL Bit Bucket Descriptor: Not Supported 00:27:01.390 SGL Metadata Pointer: Not Supported 00:27:01.390 Oversized SGL: Not Supported 00:27:01.390 SGL Metadata Address: Not Supported 00:27:01.390 SGL Offset: Supported 00:27:01.390 Transport SGL Data Block: Not Supported 00:27:01.390 Replay Protected Memory Block: Not Supported 00:27:01.390 00:27:01.390 Firmware Slot Information 00:27:01.390 ========================= 00:27:01.390 Active slot: 1 00:27:01.390 Slot 1 Firmware Revision: 25.01 00:27:01.390 00:27:01.390 00:27:01.390 Commands Supported and Effects 00:27:01.390 ============================== 00:27:01.390 Admin Commands 00:27:01.390 -------------- 00:27:01.390 Get Log Page (02h): Supported 00:27:01.390 Identify (06h): Supported 00:27:01.390 Abort (08h): Supported 00:27:01.390 Set Features (09h): Supported 00:27:01.390 Get Features (0Ah): Supported 00:27:01.390 Asynchronous Event Request (0Ch): Supported 00:27:01.390 Keep Alive (18h): Supported 00:27:01.390 I/O Commands 00:27:01.390 ------------ 00:27:01.390 Flush (00h): Supported LBA-Change 00:27:01.390 Write (01h): Supported LBA-Change 00:27:01.390 Read (02h): Supported 00:27:01.390 Compare (05h): Supported 00:27:01.390 Write Zeroes (08h): Supported LBA-Change 00:27:01.390 Dataset Management (09h): Supported LBA-Change 00:27:01.390 Copy (19h): Supported LBA-Change 00:27:01.390 00:27:01.390 Error Log 00:27:01.390 ========= 00:27:01.390 00:27:01.390 Arbitration 00:27:01.390 =========== 00:27:01.390 Arbitration Burst: 1 00:27:01.390 00:27:01.390 Power Management 00:27:01.390 ================ 00:27:01.390 Number of Power States: 1 00:27:01.390 Current Power State: Power State #0 00:27:01.390 Power State #0: 00:27:01.390 Max Power: 0.00 W 00:27:01.390 Non-Operational State: Operational 00:27:01.390 Entry Latency: Not Reported 00:27:01.390 Exit Latency: Not Reported 00:27:01.390 Relative Read Throughput: 0 00:27:01.390 Relative Read Latency: 0 00:27:01.390 Relative Write Throughput: 0 00:27:01.390 Relative Write Latency: 0 
00:27:01.390 Idle Power: Not Reported 00:27:01.390 Active Power: Not Reported 00:27:01.390 Non-Operational Permissive Mode: Not Supported 00:27:01.390 00:27:01.390 Health Information 00:27:01.390 ================== 00:27:01.390 Critical Warnings: 00:27:01.390 Available Spare Space: OK 00:27:01.390 Temperature: OK 00:27:01.390 Device Reliability: OK 00:27:01.390 Read Only: No 00:27:01.390 Volatile Memory Backup: OK 00:27:01.390 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:01.390 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:01.390 Available Spare: 0% 00:27:01.390 Available Spare Threshold: 0% 00:27:01.390 Life Percentage Used:[2024-11-26 18:22:49.387498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.390 [2024-11-26 18:22:49.387513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1365690) 00:27:01.390 [2024-11-26 18:22:49.387525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.390 [2024-11-26 18:22:49.387548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7b80, cid 7, qid 0 00:27:01.390 [2024-11-26 18:22:49.387652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.390 [2024-11-26 18:22:49.387665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.390 [2024-11-26 18:22:49.387672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.390 [2024-11-26 18:22:49.387678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7b80) on tqpair=0x1365690 00:27:01.390 [2024-11-26 18:22:49.387722] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:27:01.390 [2024-11-26 18:22:49.387742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7100) on tqpair=0x1365690 00:27:01.390 [2024-11-26 18:22:49.387752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.390 [2024-11-26 18:22:49.387761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7280) on tqpair=0x1365690 00:27:01.390 [2024-11-26 18:22:49.387769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.390 [2024-11-26 18:22:49.387776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7400) on tqpair=0x1365690 00:27:01.390 [2024-11-26 18:22:49.387784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.390 [2024-11-26 18:22:49.387792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7580) on tqpair=0x1365690 00:27:01.390 [2024-11-26 18:22:49.387799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.390 [2024-11-26 18:22:49.387811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.390 [2024-11-26 18:22:49.387818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.390 [2024-11-26 18:22:49.387825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1365690) 00:27:01.390 [2024-11-26 18:22:49.387835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:01.390 [2024-11-26 18:22:49.387857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7580, cid 3, qid 0 00:27:01.390 [2024-11-26 18:22:49.387951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.390 [2024-11-26 18:22:49.387964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.390 [2024-11-26 18:22:49.387971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.390 [2024-11-26 18:22:49.387977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7580) on tqpair=0x1365690 00:27:01.390 [2024-11-26 18:22:49.387988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.390 [2024-11-26 18:22:49.387996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.390 [2024-11-26 18:22:49.388002] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1365690) 00:27:01.391 [2024-11-26 18:22:49.388012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.391 [2024-11-26 18:22:49.388037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7580, cid 3, qid 0 00:27:01.391 [2024-11-26 18:22:49.388133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.391 [2024-11-26 18:22:49.388147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.391 [2024-11-26 18:22:49.388154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.391 [2024-11-26 18:22:49.388161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7580) on tqpair=0x1365690 00:27:01.391 [2024-11-26 18:22:49.388172] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:27:01.391 [2024-11-26 18:22:49.388181] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:27:01.391 [2024-11-26 18:22:49.388196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.391 [2024-11-26 18:22:49.388205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.391 [2024-11-26 18:22:49.388212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1365690) 00:27:01.391 [2024-11-26 18:22:49.388222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.391 [2024-11-26 18:22:49.388243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7580, cid 3, qid 0 00:27:01.391 [2024-11-26 18:22:49.388347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.391 [2024-11-26 18:22:49.388362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.391 [2024-11-26 18:22:49.388368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.391 [2024-11-26 18:22:49.388375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7580) on tqpair=0x1365690 00:27:01.391 [2024-11-26 18:22:49.388391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.391 [2024-11-26 18:22:49.388400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.391 [2024-11-26 18:22:49.388407] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1365690) 00:27:01.391 [2024-11-26 18:22:49.388417] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.391 [2024-11-26 18:22:49.388438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7580, cid 3, qid 0 00:27:01.391 [2024-11-26 18:22:49.388528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.391 [2024-11-26 18:22:49.388540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.391 [2024-11-26 18:22:49.388547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.391 [2024-11-26 18:22:49.388553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7580) on tqpair=0x1365690 00:27:01.391 [2024-11-26 18:22:49.388569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.391 [2024-11-26 18:22:49.388578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.391 [2024-11-26 18:22:49.388584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1365690) 00:27:01.391 [2024-11-26 18:22:49.388595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.391 [2024-11-26 18:22:49.388615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7580, cid 3, qid 0 00:27:01.648 [2024-11-26 18:22:49.392329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.648 [2024-11-26 18:22:49.392346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.648 [2024-11-26 18:22:49.392353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.648 [2024-11-26 18:22:49.392360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7580) on tqpair=0x1365690 00:27:01.648 [2024-11-26 18:22:49.392378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:01.648 [2024-11-26 18:22:49.392388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:01.648 [2024-11-26 18:22:49.392394] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1365690) 00:27:01.648 [2024-11-26 18:22:49.392405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.648 [2024-11-26 18:22:49.392427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13c7580, cid 3, qid 0 00:27:01.648 [2024-11-26 18:22:49.392520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:01.648 [2024-11-26 18:22:49.392533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:01.648 [2024-11-26 18:22:49.392540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:01.648 [2024-11-26 18:22:49.392553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13c7580) on tqpair=0x1365690 00:27:01.648 [2024-11-26 18:22:49.392567] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:27:01.648 0% 00:27:01.648 Data Units Read: 0 00:27:01.648 Data Units Written: 0 00:27:01.648 Host Read Commands: 0 00:27:01.648 Host Write Commands: 0 00:27:01.648 Controller Busy Time: 0 minutes 00:27:01.648 Power Cycles: 0 00:27:01.648 Power On Hours: 0 hours 00:27:01.648 Unsafe Shutdowns: 0 00:27:01.648 Unrecoverable Media Errors: 0 00:27:01.648 Lifetime Error Log Entries: 0 00:27:01.648 Warning Temperature Time: 0 
minutes 00:27:01.648 Critical Temperature Time: 0 minutes 00:27:01.648 00:27:01.648 Number of Queues 00:27:01.648 ================ 00:27:01.648 Number of I/O Submission Queues: 127 00:27:01.648 Number of I/O Completion Queues: 127 00:27:01.648 00:27:01.648 Active Namespaces 00:27:01.648 ================= 00:27:01.648 Namespace ID:1 00:27:01.648 Error Recovery Timeout: Unlimited 00:27:01.648 Command Set Identifier: NVM (00h) 00:27:01.648 Deallocate: Supported 00:27:01.648 Deallocated/Unwritten Error: Not Supported 00:27:01.648 Deallocated Read Value: Unknown 00:27:01.648 Deallocate in Write Zeroes: Not Supported 00:27:01.648 Deallocated Guard Field: 0xFFFF 00:27:01.648 Flush: Supported 00:27:01.648 Reservation: Supported 00:27:01.648 Namespace Sharing Capabilities: Multiple Controllers 00:27:01.648 Size (in LBAs): 131072 (0GiB) 00:27:01.648 Capacity (in LBAs): 131072 (0GiB) 00:27:01.648 Utilization (in LBAs): 131072 (0GiB) 00:27:01.648 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:01.648 EUI64: ABCDEF0123456789 00:27:01.648 UUID: b1482f73-92f1-484d-a2c8-2a91ff9bb880 00:27:01.648 Thin Provisioning: Not Supported 00:27:01.648 Per-NS Atomic Units: Yes 00:27:01.648 Atomic Boundary Size (Normal): 0 00:27:01.648 Atomic Boundary Size (PFail): 0 00:27:01.648 Atomic Boundary Offset: 0 00:27:01.648 Maximum Single Source Range Length: 65535 00:27:01.648 Maximum Copy Length: 65535 00:27:01.648 Maximum Source Range Count: 1 00:27:01.648 NGUID/EUI64 Never Reused: No 00:27:01.648 Namespace Write Protected: No 00:27:01.648 Number of LBA Formats: 1 00:27:01.648 Current LBA Format: LBA Format #00 00:27:01.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:01.648 00:27:01.648 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:01.648 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.648 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.648 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:01.648 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.648 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:01.649 rmmod nvme_tcp 00:27:01.649 rmmod nvme_fabrics 00:27:01.649 rmmod nvme_keyring 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 671592 ']' 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@518 -- # killprocess 671592 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 671592 ']' 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 671592 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 671592 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 671592' 00:27:01.649 killing process with pid 671592 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 671592 00:27:01.649 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 671592 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.907 18:22:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.813 18:22:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:03.813 00:27:03.813 real 0m5.741s 00:27:03.813 user 0m4.803s 00:27:03.813 sys 0m2.084s 00:27:03.813 18:22:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.813 18:22:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:03.813 ************************************ 00:27:03.813 END TEST nvmf_identify 00:27:03.813 ************************************ 00:27:04.072 18:22:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:04.072 18:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:04.072 18:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:04.072 18:22:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.072 ************************************ 
00:27:04.072 START TEST nvmf_perf 00:27:04.072 ************************************ 00:27:04.072 18:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:04.072 * Looking for test storage... 00:27:04.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:04.072 18:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:04.072 18:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:27:04.072 18:22:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:04.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.072 --rc genhtml_branch_coverage=1 00:27:04.072 --rc genhtml_function_coverage=1 00:27:04.072 --rc genhtml_legend=1 00:27:04.072 --rc geninfo_all_blocks=1 00:27:04.072 --rc geninfo_unexecuted_blocks=1 00:27:04.072 00:27:04.072 ' 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:04.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.072 --rc genhtml_branch_coverage=1 00:27:04.072 --rc genhtml_function_coverage=1 00:27:04.072 --rc genhtml_legend=1 00:27:04.072 --rc geninfo_all_blocks=1 00:27:04.072 --rc geninfo_unexecuted_blocks=1 00:27:04.072 00:27:04.072 ' 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:04.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.072 --rc genhtml_branch_coverage=1 00:27:04.072 --rc genhtml_function_coverage=1 00:27:04.072 --rc genhtml_legend=1 00:27:04.072 --rc geninfo_all_blocks=1 00:27:04.072 --rc geninfo_unexecuted_blocks=1 00:27:04.072 00:27:04.072 ' 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:04.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:04.072 --rc genhtml_branch_coverage=1 00:27:04.072 --rc genhtml_function_coverage=1 00:27:04.072 --rc genhtml_legend=1 00:27:04.072 --rc geninfo_all_blocks=1 00:27:04.072 --rc geninfo_unexecuted_blocks=1 00:27:04.072 00:27:04.072 ' 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.072 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:04.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.073 18:22:52 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:04.073 18:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:06.606 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:06.606 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:06.606 Found net devices under 0000:09:00.0: cvl_0_0 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:06.606 18:22:54 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:06.606 Found net devices under 0000:09:00.1: cvl_0_1 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:06.606 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.607 18:22:54 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:06.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:27:06.607 00:27:06.607 --- 10.0.0.2 ping statistics --- 00:27:06.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.607 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:27:06.607 00:27:06.607 --- 10.0.0.1 ping statistics --- 00:27:06.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.607 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=673689 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 673689 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 673689 ']' 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:27:06.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:06.607 [2024-11-26 18:22:54.357124] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:27:06.607 [2024-11-26 18:22:54.357204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.607 [2024-11-26 18:22:54.428119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:06.607 [2024-11-26 18:22:54.484331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.607 [2024-11-26 18:22:54.484399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.607 [2024-11-26 18:22:54.484412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.607 [2024-11-26 18:22:54.484422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.607 [2024-11-26 18:22:54.484431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.607 [2024-11-26 18:22:54.486008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.607 [2024-11-26 18:22:54.486066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.607 [2024-11-26 18:22:54.486118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:06.607 [2024-11-26 18:22:54.486121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.607 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:06.865 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.865 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:06.865 18:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:10.142 18:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:10.142 18:22:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:10.143 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:27:10.143 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:10.400 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
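The bdev_malloc_create call above, together with the rpc.py calls in the output that follows, is what stands up the NVMe-oF TCP target that this perf run connects to. Collected in one place, the sequence is roughly the sketch below; rpc.py abbreviates the full scripts/rpc.py path used in the log, nvmf_tgt is assumed to be already running and listening on the default /var/tmp/spdk.sock RPC socket (as launched by nvmfappstart above), and the Nvme0n1 bdev comes from the gen_nvme.sh/load_subsystem_config step earlier in this trace:

  # create the TCP transport, a subsystem with two namespaces, and a data + discovery listener
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py bdev_malloc_create 64 512                                   # returns the bdev name, Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # local NVMe bdev at 0000:0b:00.0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The initiator-side measurements that follow are then taken with spdk_nvme_perf against 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', exactly as shown in the runs below.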
00:27:10.400 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:27:10.400 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:10.400 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:10.400 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:10.657 [2024-11-26 18:22:58.578565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:10.658 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:10.915 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:10.915 18:22:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:11.210 18:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:11.210 18:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:11.490 18:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:11.748 [2024-11-26 18:22:59.666592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.748 18:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:12.005 18:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:27:12.005 18:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:27:12.005 18:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:12.005 18:22:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:27:13.378 Initializing NVMe Controllers 00:27:13.378 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:27:13.378 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:27:13.378 Initialization complete. Launching workers. 
00:27:13.378 ======================================================== 00:27:13.378 Latency(us) 00:27:13.378 Device Information : IOPS MiB/s Average min max 00:27:13.378 PCIE (0000:0b:00.0) NSID 1 from core 0: 82841.84 323.60 385.61 35.91 7357.99 00:27:13.378 ======================================================== 00:27:13.378 Total : 82841.84 323.60 385.61 35.91 7357.99 00:27:13.378 00:27:13.378 18:23:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:14.749 Initializing NVMe Controllers 00:27:14.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:14.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:14.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:14.749 Initialization complete. Launching workers. 00:27:14.749 ======================================================== 00:27:14.749 Latency(us) 00:27:14.749 Device Information : IOPS MiB/s Average min max 00:27:14.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 10869.69 138.78 45927.32 00:27:14.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17954.90 7947.37 47923.06 00:27:14.749 ======================================================== 00:27:14.749 Total : 148.00 0.58 13550.58 138.78 47923.06 00:27:14.749 00:27:14.749 18:23:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:16.122 Initializing NVMe Controllers 00:27:16.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:16.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:16.122 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:16.122 Initialization complete. Launching workers. 00:27:16.122 ======================================================== 00:27:16.122 Latency(us) 00:27:16.122 Device Information : IOPS MiB/s Average min max 00:27:16.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8041.49 31.41 3981.06 878.10 7748.74 00:27:16.122 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3925.75 15.33 8194.97 6799.62 15738.42 00:27:16.122 ======================================================== 00:27:16.122 Total : 11967.23 46.75 5363.40 878.10 15738.42 00:27:16.122 00:27:16.122 18:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:16.122 18:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:16.122 18:23:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:18.650 Initializing NVMe Controllers 00:27:18.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.650 Controller IO queue size 128, less than required. 00:27:18.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:18.650 Controller IO queue size 128, less than required. 00:27:18.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:18.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:18.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:18.650 Initialization complete. Launching workers. 00:27:18.650 ======================================================== 00:27:18.650 Latency(us) 00:27:18.650 Device Information : IOPS MiB/s Average min max 00:27:18.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1643.38 410.85 79407.11 48713.66 134806.25 00:27:18.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 591.46 147.86 223031.40 76910.28 325566.77 00:27:18.650 ======================================================== 00:27:18.650 Total : 2234.84 558.71 117417.74 48713.66 325566.77 00:27:18.650 00:27:18.650 18:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:18.650 No valid NVMe controllers or AIO or URING devices found 00:27:18.650 Initializing NVMe Controllers 00:27:18.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:18.650 Controller IO queue size 128, less than required. 00:27:18.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:18.650 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:18.650 Controller IO queue size 128, less than required. 00:27:18.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:18.650 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:27:18.650 WARNING: Some requested NVMe devices were skipped 00:27:18.650 18:23:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:21.178 Initializing NVMe Controllers 00:27:21.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:21.178 Controller IO queue size 128, less than required. 00:27:21.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:21.178 Controller IO queue size 128, less than required. 00:27:21.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:21.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:21.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:21.178 Initialization complete. Launching workers. 
00:27:21.178 00:27:21.178 ==================== 00:27:21.178 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:27:21.178 TCP transport: 00:27:21.178 polls: 12235 00:27:21.178 idle_polls: 9067 00:27:21.178 sock_completions: 3168 00:27:21.178 nvme_completions: 5919 00:27:21.178 submitted_requests: 8868 00:27:21.178 queued_requests: 1 00:27:21.178 00:27:21.178 ==================== 00:27:21.178 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:27:21.178 TCP transport: 00:27:21.178 polls: 12374 00:27:21.178 idle_polls: 8962 00:27:21.178 sock_completions: 3412 00:27:21.178 nvme_completions: 6261 00:27:21.178 submitted_requests: 9372 00:27:21.178 queued_requests: 1 00:27:21.178 ======================================================== 00:27:21.178 Latency(us) 00:27:21.178 Device Information : IOPS MiB/s Average min max 00:27:21.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1479.46 369.87 88701.02 47837.92 136105.72 00:27:21.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1564.96 391.24 82061.76 48224.08 129160.23 00:27:21.178 ======================================================== 00:27:21.178 Total : 3044.42 761.10 85288.16 47837.92 136105.72 00:27:21.178 00:27:21.178 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:27:21.437 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.698 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:27:21.698 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:21.698 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:21.699 rmmod nvme_tcp 00:27:21.699 rmmod nvme_fabrics 00:27:21.699 rmmod nvme_keyring 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 673689 ']' 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 673689 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 673689 ']' 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 673689 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 673689 00:27:21.699 18:23:09 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 673689' 00:27:21.699 killing process with pid 673689 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 673689 00:27:21.699 18:23:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 673689 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.596 18:23:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.498 00:27:25.498 real 0m21.354s 00:27:25.498 user 1m5.749s 00:27:25.498 sys 0m5.578s 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:25.498 ************************************ 00:27:25.498 END TEST nvmf_perf 00:27:25.498 ************************************ 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.498 ************************************ 00:27:25.498 START TEST nvmf_fio_host 00:27:25.498 ************************************ 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:25.498 * Looking for test storage... 
00:27:25.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:27:25.498 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.499 --rc genhtml_branch_coverage=1 00:27:25.499 --rc genhtml_function_coverage=1 00:27:25.499 --rc genhtml_legend=1 00:27:25.499 --rc geninfo_all_blocks=1 00:27:25.499 --rc geninfo_unexecuted_blocks=1 00:27:25.499 00:27:25.499 ' 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.499 --rc genhtml_branch_coverage=1 00:27:25.499 --rc genhtml_function_coverage=1 00:27:25.499 --rc genhtml_legend=1 00:27:25.499 --rc geninfo_all_blocks=1 00:27:25.499 --rc geninfo_unexecuted_blocks=1 00:27:25.499 00:27:25.499 ' 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.499 --rc genhtml_branch_coverage=1 00:27:25.499 --rc genhtml_function_coverage=1 00:27:25.499 --rc genhtml_legend=1 00:27:25.499 --rc geninfo_all_blocks=1 00:27:25.499 --rc geninfo_unexecuted_blocks=1 00:27:25.499 00:27:25.499 ' 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:25.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.499 --rc genhtml_branch_coverage=1 00:27:25.499 --rc genhtml_function_coverage=1 00:27:25.499 --rc genhtml_legend=1 00:27:25.499 --rc geninfo_all_blocks=1 00:27:25.499 --rc geninfo_unexecuted_blocks=1 00:27:25.499 00:27:25.499 ' 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.499 18:23:13 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.499 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:25.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:25.500 
18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.500 18:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:27.399 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:27.399 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:27.399 Found net devices under 0000:09:00.0: cvl_0_0 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:27.399 Found net devices under 0000:09:00.1: cvl_0_1 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.399 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.657 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.657 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.657 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:27.657 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.657 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.657 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.657 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:27.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:27:27.658 00:27:27.658 --- 10.0.0.2 ping statistics --- 00:27:27.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.658 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:27.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:27:27.658 00:27:27.658 --- 10.0.0.1 ping statistics --- 00:27:27.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.658 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=677543 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 677543 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 677543 ']' 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.658 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.658 [2024-11-26 18:23:15.597699] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
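The nvmf_tcp_init sequence above is what the rest of these host tests rely on: one port of the two-port E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as 10.0.0.1, and the two pings verify reachability in both directions before the target is launched. Condensed into a sketch (interface names and addresses are specific to this test bed):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

The nvmf_tgt launched just above (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) therefore runs inside that namespace, so its port 4420 listener is reached over the cvl_0_0/cvl_0_1 link rather than the host's loopback.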
00:27:27.658 [2024-11-26 18:23:15.597777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.915 [2024-11-26 18:23:15.674626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:27.915 [2024-11-26 18:23:15.734485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.915 [2024-11-26 18:23:15.734536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.915 [2024-11-26 18:23:15.734565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.915 [2024-11-26 18:23:15.734578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.915 [2024-11-26 18:23:15.734588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:27.915 [2024-11-26 18:23:15.736236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.915 [2024-11-26 18:23:15.736321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:27.915 [2024-11-26 18:23:15.736351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:27.915 [2024-11-26 18:23:15.736354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.915 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:27.915 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:27:27.915 18:23:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:28.173 [2024-11-26 18:23:16.105483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.173 18:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:28.173 18:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:28.173 18:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.173 18:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:28.738 Malloc1 00:27:28.738 18:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:28.995 18:23:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:29.253 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.512 [2024-11-26 18:23:17.342381] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.512 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:29.770 18:23:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:30.027 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:30.027 fio-3.35 00:27:30.027 Starting 1 thread 00:27:32.552 00:27:32.552 test: (groupid=0, jobs=1): 
err= 0: pid=678014: Tue Nov 26 18:23:20 2024 00:27:32.552 read: IOPS=8794, BW=34.4MiB/s (36.0MB/s)(68.9MiB/2007msec) 00:27:32.552 slat (nsec): min=1967, max=192417, avg=2603.80, stdev=2338.05 00:27:32.552 clat (usec): min=2705, max=14298, avg=7923.36, stdev=655.47 00:27:32.552 lat (usec): min=2737, max=14301, avg=7925.96, stdev=655.33 00:27:32.552 clat percentiles (usec): 00:27:32.552 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:27:32.552 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7898], 60.00th=[ 8094], 00:27:32.552 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:27:32.552 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[12518], 99.95th=[13173], 00:27:32.552 | 99.99th=[14222] 00:27:32.552 bw ( KiB/s): min=34200, max=35872, per=99.95%, avg=35162.00, stdev=703.89, samples=4 00:27:32.552 iops : min= 8550, max= 8968, avg=8790.50, stdev=175.97, samples=4 00:27:32.552 write: IOPS=8802, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2007msec); 0 zone resets 00:27:32.552 slat (usec): min=2, max=161, avg= 2.70, stdev= 1.79 00:27:32.552 clat (usec): min=1748, max=12887, avg=6558.70, stdev=552.27 00:27:32.552 lat (usec): min=1759, max=12890, avg=6561.40, stdev=552.19 00:27:32.552 clat percentiles (usec): 00:27:32.552 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:27:32.552 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6718], 00:27:32.552 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:27:32.552 | 99.00th=[ 7701], 99.50th=[ 7898], 99.90th=[10814], 99.95th=[12387], 00:27:32.552 | 99.99th=[12911] 00:27:32.552 bw ( KiB/s): min=35008, max=35584, per=100.00%, avg=35220.00, stdev=255.12, samples=4 00:27:32.552 iops : min= 8752, max= 8896, avg=8805.00, stdev=63.78, samples=4 00:27:32.552 lat (msec) : 2=0.01%, 4=0.12%, 10=99.66%, 20=0.21% 00:27:32.552 cpu : usr=65.00%, sys=33.40%, ctx=95, majf=0, minf=31 00:27:32.552 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:32.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:32.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:32.552 issued rwts: total=17651,17667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:32.552 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:32.552 00:27:32.552 Run status group 0 (all jobs): 00:27:32.552 READ: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=68.9MiB (72.3MB), run=2007-2007msec 00:27:32.552 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.4MB), run=2007-2007msec 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
local sanitizers 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:32.552 18:23:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:32.552 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:32.552 fio-3.35 00:27:32.552 Starting 1 thread 00:27:35.109 00:27:35.109 test: (groupid=0, jobs=1): err= 0: pid=678348: Tue Nov 26 18:23:22 2024 00:27:35.109 read: IOPS=8408, BW=131MiB/s (138MB/s)(263MiB/2005msec) 00:27:35.109 slat (usec): min=2, max=110, avg= 3.56, stdev= 1.53 00:27:35.109 clat (usec): min=2600, max=16198, avg=8745.21, stdev=2001.77 00:27:35.109 lat (usec): min=2604, max=16202, avg=8748.77, stdev=2001.78 00:27:35.109 clat percentiles (usec): 00:27:35.109 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 7046], 00:27:35.109 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9241], 00:27:35.109 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11338], 95.00th=[12125], 00:27:35.109 | 99.00th=[13698], 99.50th=[14353], 99.90th=[15533], 99.95th=[15664], 00:27:35.109 | 99.99th=[16188] 00:27:35.109 bw ( KiB/s): min=60320, max=74752, per=50.73%, avg=68256.00, stdev=6832.08, samples=4 00:27:35.109 iops : min= 3770, max= 4672, avg=4266.00, stdev=427.01, samples=4 00:27:35.109 write: IOPS=4880, BW=76.3MiB/s (80.0MB/s)(140MiB/1831msec); 0 zone resets 00:27:35.109 slat 
(usec): min=30, max=128, avg=33.13, stdev= 4.47 00:27:35.109 clat (usec): min=4940, max=20029, avg=11493.08, stdev=2005.82 00:27:35.109 lat (usec): min=4972, max=20062, avg=11526.21, stdev=2005.84 00:27:35.109 clat percentiles (usec): 00:27:35.109 | 1.00th=[ 7242], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:27:35.109 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:27:35.109 | 70.00th=[12387], 80.00th=[13042], 90.00th=[14091], 95.00th=[15008], 00:27:35.109 | 99.00th=[16450], 99.50th=[17695], 99.90th=[19268], 99.95th=[19530], 00:27:35.109 | 99.99th=[20055] 00:27:35.109 bw ( KiB/s): min=62144, max=77472, per=90.77%, avg=70880.00, stdev=7452.68, samples=4 00:27:35.109 iops : min= 3884, max= 4842, avg=4430.00, stdev=465.79, samples=4 00:27:35.109 lat (msec) : 4=0.23%, 10=57.06%, 20=42.71%, 50=0.01% 00:27:35.109 cpu : usr=77.46%, sys=21.35%, ctx=52, majf=0, minf=53 00:27:35.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:35.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:35.109 issued rwts: total=16859,8936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:35.109 00:27:35.109 Run status group 0 (all jobs): 00:27:35.109 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=263MiB (276MB), run=2005-2005msec 00:27:35.109 WRITE: bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=140MiB (146MB), run=1831-1831msec 00:27:35.109 18:23:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.109 rmmod nvme_tcp 00:27:35.109 rmmod nvme_fabrics 00:27:35.109 rmmod nvme_keyring 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 677543 ']' 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 677543 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 677543 ']' 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 677543 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.109 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 677543 00:27:35.367 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:35.367 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:35.367 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 677543' 00:27:35.367 killing process with pid 677543 00:27:35.367 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 677543 00:27:35.367 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 677543 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.626 18:23:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.525 18:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.525 00:27:37.525 real 0m12.167s 00:27:37.525 user 0m36.409s 00:27:37.525 sys 0m3.833s 00:27:37.525 18:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.525 18:23:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.525 ************************************ 00:27:37.525 END TEST nvmf_fio_host 00:27:37.525 ************************************ 00:27:37.525 18:23:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:37.525 18:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:37.525 18:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.525 18:23:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.525 ************************************ 00:27:37.525 START TEST nvmf_failover 00:27:37.525 ************************************ 00:27:37.525 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:37.525 * Looking for test storage... 00:27:37.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.784 --rc genhtml_branch_coverage=1 00:27:37.784 --rc genhtml_function_coverage=1 00:27:37.784 --rc genhtml_legend=1 00:27:37.784 --rc geninfo_all_blocks=1 00:27:37.784 --rc geninfo_unexecuted_blocks=1 00:27:37.784 00:27:37.784 ' 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.784 --rc genhtml_branch_coverage=1 00:27:37.784 --rc genhtml_function_coverage=1 00:27:37.784 --rc genhtml_legend=1 00:27:37.784 --rc geninfo_all_blocks=1 00:27:37.784 --rc geninfo_unexecuted_blocks=1 00:27:37.784 00:27:37.784 ' 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.784 --rc genhtml_branch_coverage=1 00:27:37.784 --rc genhtml_function_coverage=1 00:27:37.784 --rc genhtml_legend=1 00:27:37.784 --rc geninfo_all_blocks=1 00:27:37.784 --rc geninfo_unexecuted_blocks=1 00:27:37.784 00:27:37.784 ' 00:27:37.784 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:37.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.784 --rc genhtml_branch_coverage=1 00:27:37.784 --rc genhtml_function_coverage=1 00:27:37.784 --rc genhtml_legend=1 00:27:37.784 --rc geninfo_all_blocks=1 00:27:37.784 --rc geninfo_unexecuted_blocks=1 00:27:37.784 00:27:37.785 ' 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:37.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
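From here the failover test drives two separate SPDK processes over JSON-RPC: the nvmf target (started further down) answers on its default UNIX socket /var/tmp/spdk.sock, while bdevperf is launched with -r /var/tmp/bdevperf.sock and therefore needs an explicit -s on every call. A minimal sketch of that distinction, using the rpc_py path assigned above (rpc_get_methods is only an illustrative query, not something this script runs here):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target-side RPCs go to the default socket, /var/tmp/spdk.sock
    $rpc_py rpc_get_methods
    # bdevperf-side RPCs must name the socket bdevperf was started with (-r /var/tmp/bdevperf.sock)
    $rpc_py -s /var/tmp/bdevperf.sock rpc_get_methods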
00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.785 18:23:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:40.313 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.313 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:40.314 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:40.314 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:40.314 Found net devices under 0000:09:00.0: cvl_0_0 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:40.314 Found net devices under 0000:09:00.1: cvl_0_1 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.314 18:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:40.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:27:40.314 00:27:40.314 --- 10.0.0.2 ping statistics --- 00:27:40.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.314 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:27:40.314 00:27:40.314 --- 10.0.0.1 ping statistics --- 00:27:40.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.314 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:40.314 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=680604 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 680604 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 680604 ']' 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.315 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:40.315 [2024-11-26 18:23:28.132065] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:27:40.315 [2024-11-26 18:23:28.132160] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.315 [2024-11-26 18:23:28.207280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:40.315 [2024-11-26 18:23:28.266531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:40.315 [2024-11-26 18:23:28.266590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.315 [2024-11-26 18:23:28.266603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.315 [2024-11-26 18:23:28.266619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.315 [2024-11-26 18:23:28.266628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.315 [2024-11-26 18:23:28.268055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.315 [2024-11-26 18:23:28.268118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.315 [2024-11-26 18:23:28.268121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.573 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.573 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:40.573 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:40.573 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:40.573 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:40.573 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.573 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:40.830 [2024-11-26 18:23:28.720984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.830 18:23:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:41.087 Malloc0 00:27:41.087 18:23:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:41.653 18:23:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:41.653 18:23:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:42.218 [2024-11-26 18:23:29.956940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.218 18:23:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:42.475 [2024-11-26 18:23:30.249870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:42.475 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:42.734 [2024-11-26 18:23:30.530681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=680968 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 680968 /var/tmp/bdevperf.sock 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 680968 ']' 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:42.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.734 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:43.073 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.073 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:43.073 18:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:43.651 NVMe0n1 00:27:43.651 18:23:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:43.908 00:27:43.908 18:23:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=681106 00:27:43.908 18:23:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:43.908 18:23:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:27:44.842 18:23:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:45.408 [2024-11-26 18:23:33.149255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78460 is same with the state(6) to be set 00:27:45.408 [2024-11-26 18:23:33.149337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78460 is same with the state(6) to be set 00:27:45.408 [2024-11-26 18:23:33.149354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78460 is same with the state(6) to be set 00:27:45.408 [2024-11-26 
18:23:33.149367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78460 is same with the state(6) to be set 00:27:45.408 [last message repeated for tqpair=0xa78460 at timestamps 18:23:33.149379 through 18:23:33.150426]
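bdevperf attached NVMe0 with two paths (ports 4420 and 4421) and -x failover, so removing the 4420 listener above forces I/O onto 4421; the burst of tcp.c:1773 recv-state messages is logged while the target disconnects the qpair that was using the dropped listener. The rest of the trace keeps rotating paths the same way; the pattern, condensed into a sketch (rpc.py path and NQN as used throughout this run):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420     # drop the active path
    sleep 3                                                                 # let bdevperf fail over to 4421
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
         -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn -x failover             # add a third path
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421     # drop the second path
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420        # bring the first path back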
00:27:45.409 [2024-11-26 18:23:33.150438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78460 is same with the state(6) to be set 00:27:45.409 [2024-11-26 18:23:33.150449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78460 is same with the state(6) to be set 00:27:45.409 [2024-11-26 18:23:33.150461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78460 is same with the state(6) to be set 00:27:45.409 [2024-11-26 18:23:33.150472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78460 is same with the state(6) to be set 00:27:45.409 18:23:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:27:48.688 18:23:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:48.947 00:27:48.947 18:23:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:49.204 [2024-11-26 18:23:37.000012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be 
set 00:27:49.204 [2024-11-26 18:23:37.000245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 [2024-11-26 18:23:37.000256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa78f10 is same with the state(6) to be set 00:27:49.204 18:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:27:52.484 18:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.484 [2024-11-26 18:23:40.335189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.484 18:23:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:27:53.416 18:23:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:53.674 [2024-11-26 18:23:41.661712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be 
set 00:27:53.674 [2024-11-26 18:23:41.661951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.661998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 
is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.674 [2024-11-26 18:23:41.662311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.675 [2024-11-26 18:23:41.662707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x93e280 is same with the state(6) to be set 00:27:53.932 18:23:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 681106 00:27:59.192 { 00:27:59.192 "results": [ 00:27:59.192 { 00:27:59.192 "job": "NVMe0n1", 00:27:59.192 "core_mask": "0x1", 00:27:59.192 "workload": "verify", 00:27:59.193 "status": "finished", 00:27:59.193 "verify_range": { 00:27:59.193 "start": 0, 00:27:59.193 "length": 16384 00:27:59.193 }, 
00:27:59.193 "queue_depth": 128, 00:27:59.193 "io_size": 4096, 00:27:59.193 "runtime": 15.01467, 00:27:59.193 "iops": 8477.908605383935, 00:27:59.193 "mibps": 33.116830489780995, 00:27:59.193 "io_failed": 3933, 00:27:59.193 "io_timeout": 0, 00:27:59.193 "avg_latency_us": 14617.80414382651, 00:27:59.193 "min_latency_us": 543.0992592592593, 00:27:59.193 "max_latency_us": 16699.543703703705 00:27:59.193 } 00:27:59.193 ], 00:27:59.193 "core_count": 1 00:27:59.193 } 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 680968 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 680968 ']' 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 680968 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 680968 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 680968' 00:27:59.193 killing process with pid 680968 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 680968 00:27:59.193 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 680968 00:27:59.457 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:59.457 [2024-11-26 18:23:30.599900] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:27:59.457 [2024-11-26 18:23:30.599985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid680968 ] 00:27:59.457 [2024-11-26 18:23:30.668892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.457 [2024-11-26 18:23:30.728462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.457 Running I/O for 15 seconds... 
00:27:59.457 8510.00 IOPS, 33.24 MiB/s [2024-11-26T17:23:47.468Z] [2024-11-26 18:23:33.151028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.457 [2024-11-26 18:23:33.151069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.457 [2024-11-26 18:23:33.151097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.457 [2024-11-26 18:23:33.151113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.457 [2024-11-26 18:23:33.151130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.457 [2024-11-26 18:23:33.151145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.457 [2024-11-26 18:23:33.151161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.457 [2024-11-26 18:23:33.151176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.457 [2024-11-26 18:23:33.151192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.457 [2024-11-26 18:23:33.151206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:59.458 [2024-11-26 18:23:33.151393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 
18:23:33.151716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.151977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.151992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.458 [2024-11-26 18:23:33.152283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.458 [2024-11-26 18:23:33.152318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:107 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.152485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.152516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.152546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82328 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.152954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.152971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:59.459 [2024-11-26 18:23:33.152985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.153019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.153050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.153079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.153110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.153140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.153169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.153198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.153227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.153257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.153293] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.153333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.153362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.153391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.153425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.153456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.459 [2024-11-26 18:23:33.153485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.153515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.459 [2024-11-26 18:23:33.153544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.459 [2024-11-26 18:23:33.153559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153602] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.153977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.153991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 
[2024-11-26 18:23:33.154219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.460 [2024-11-26 18:23:33.154453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.460 [2024-11-26 18:23:33.154501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82824 len:8 PRP1 0x0 PRP2 0x0 00:27:59.460 [2024-11-26 18:23:33.154515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.460 [2024-11-26 18:23:33.154545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.460 [2024-11-26 18:23:33.154556] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82832 len:8 PRP1 0x0 PRP2 0x0 00:27:59.460 [2024-11-26 18:23:33.154570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.460 [2024-11-26 18:23:33.154604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.460 [2024-11-26 18:23:33.154615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82840 len:8 PRP1 0x0 PRP2 0x0 00:27:59.460 [2024-11-26 18:23:33.154634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.460 [2024-11-26 18:23:33.154659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.460 [2024-11-26 18:23:33.154671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82848 len:8 PRP1 0x0 PRP2 0x0 00:27:59.460 [2024-11-26 18:23:33.154684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.460 [2024-11-26 18:23:33.154709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.460 [2024-11-26 18:23:33.154720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82856 len:8 PRP1 0x0 PRP2 0x0 00:27:59.460 [2024-11-26 18:23:33.154733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.460 [2024-11-26 18:23:33.154757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.460 [2024-11-26 18:23:33.154769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82864 len:8 PRP1 0x0 PRP2 0x0 00:27:59.460 [2024-11-26 18:23:33.154782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.460 [2024-11-26 18:23:33.154795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.154807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.154818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82872 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.154831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.154844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.154855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.154867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:82880 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.154880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.154893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.154904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.154915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82888 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.154929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.154942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.154953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.154964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82896 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.154981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.155012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.155024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82904 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.155038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.155063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.155074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82912 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.155087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.155112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.155123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82920 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.155137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.155161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.155172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82928 len:8 PRP1 0x0 PRP2 
0x0 00:27:59.461 [2024-11-26 18:23:33.155185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.155210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.155221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.155234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.155259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.155270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.155283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.155316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.155329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82952 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.155343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.155371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.155383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82960 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.155397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.461 [2024-11-26 18:23:33.155427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.461 [2024-11-26 18:23:33.155439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82968 len:8 PRP1 0x0 PRP2 0x0 00:27:59.461 [2024-11-26 18:23:33.155452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155515] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:59.461 [2024-11-26 18:23:33.155553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.461 [2024-11-26 18:23:33.155572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.461 [2024-11-26 18:23:33.155604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.461 [2024-11-26 18:23:33.155632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.461 [2024-11-26 18:23:33.155659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:33.155673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:59.461 [2024-11-26 18:23:33.158968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:27:59.461 [2024-11-26 18:23:33.159006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ff570 (9): Bad file descriptor 00:27:59.461 [2024-11-26 18:23:33.184472] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:27:59.461 8309.50 IOPS, 32.46 MiB/s [2024-11-26T17:23:47.472Z] 8374.33 IOPS, 32.71 MiB/s [2024-11-26T17:23:47.472Z] 8393.50 IOPS, 32.79 MiB/s [2024-11-26T17:23:47.472Z] 8433.20 IOPS, 32.94 MiB/s [2024-11-26T17:23:47.472Z] [2024-11-26 18:23:37.000615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.461 [2024-11-26 18:23:37.000653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.000709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.000757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.000793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.000825] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.000854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.000883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.000913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.000942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.000972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.461 [2024-11-26 18:23:37.000994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.461 [2024-11-26 18:23:37.001009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001126] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.462 [2024-11-26 18:23:37.001537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.462 [2024-11-26 18:23:37.001568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.462 [2024-11-26 18:23:37.001626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.462 [2024-11-26 18:23:37.001657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.462 [2024-11-26 18:23:37.001687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.462 [2024-11-26 18:23:37.001716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.462 [2024-11-26 18:23:37.001745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 
[2024-11-26 18:23:37.001820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.001979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.001998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.002013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.002029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.462 [2024-11-26 18:23:37.002043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.462 [2024-11-26 18:23:37.002058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:66 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88416 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.002966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.002982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.463 [2024-11-26 18:23:37.002997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.003027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.003057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 
18:23:37.003087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.003116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.003147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.003176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.003211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.003242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.003272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.463 [2024-11-26 18:23:37.003323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.463 [2024-11-26 18:23:37.003340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.003976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.003991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.464 [2024-11-26 18:23:37.004025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 
18:23:37.004394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.464 [2024-11-26 18:23:37.004583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.464 [2024-11-26 18:23:37.004597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.004613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:37.004627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.004643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:37.004658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.004673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:37.004687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.004702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:37.004716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.004747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.465 [2024-11-26 18:23:37.004763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.465 [2024-11-26 18:23:37.004775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87952 len:8 PRP1 0x0 PRP2 0x0 00:27:59.465 [2024-11-26 18:23:37.004789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.004855] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:59.465 [2024-11-26 18:23:37.004893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.465 [2024-11-26 18:23:37.004917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.004934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.465 [2024-11-26 18:23:37.004948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.004963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.465 [2024-11-26 18:23:37.004977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.004991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.465 [2024-11-26 18:23:37.005005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:37.005019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:27:59.465 [2024-11-26 18:23:37.008269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:27:59.465 [2024-11-26 18:23:37.008329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ff570 (9): Bad file descriptor 00:27:59.465 [2024-11-26 18:23:37.038490] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
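The per-interval throughput samples in this log follow directly from the IOPS samples once the I/O size is fixed: each command here is 8 LBAs ("len:8", presumably 512-byte LBAs, i.e. 4 KiB per I/O), so 8388.00 IOPS works out to roughly 32.77 MiB/s, and 8309.50 IOPS to roughly 32.46 MiB/s, matching the figures printed between the failover cycles. The short sketch below reproduces that arithmetic and decodes the "(00/08)" pair printed for the aborted commands, assuming it is the NVMe status-code-type / status-code pair; iops_to_mibps and decode_status are illustrative helpers, not SPDK APIs.

/* Illustrative sketch only, under the assumptions stated above. */
#include <stdio.h>

/* Convert an IOPS sample to MiB/s for a fixed I/O size in bytes. */
static double iops_to_mibps(double iops, unsigned io_size_bytes)
{
        return iops * io_size_bytes / (1024.0 * 1024.0);
}

/* Decode the "(sct/sc)" pair as printed by the completion notices above.
 * Only the combination seen in this log is handled explicitly. */
static const char *decode_status(unsigned sct, unsigned sc)
{
        if (sct == 0x0 && sc == 0x08) {
                return "ABORTED - SQ DELETION"; /* generic status type, command aborted due to SQ deletion */
        }
        return "other status (see the NVMe base specification status code tables)";
}

int main(void)
{
        /* 8388.00 IOPS at 4 KiB per I/O -> ~32.77 MiB/s, as sampled above. */
        printf("%.2f MiB/s\n", iops_to_mibps(8388.00, 4096));
        printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
        return 0;
}

The same conversion reproduces the other samples in this section (e.g. 8429.00 IOPS -> ~32.93 MiB/s), which is consistent with a constant 4 KiB random workload running while the controller fails over from 10.0.0.2:4420 to 4421 and then to 4422.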
00:27:59.465 8388.00 IOPS, 32.77 MiB/s [2024-11-26T17:23:47.476Z] 8407.00 IOPS, 32.84 MiB/s [2024-11-26T17:23:47.476Z] 8434.75 IOPS, 32.95 MiB/s [2024-11-26T17:23:47.476Z] 8429.00 IOPS, 32.93 MiB/s [2024-11-26T17:23:47.476Z] [2024-11-26 18:23:41.664508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:59.465 [2024-11-26 18:23:41.664862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.664977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.664992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665154] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.465 [2024-11-26 18:23:41.665271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.465 [2024-11-26 18:23:41.665301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.465 [2024-11-26 18:23:41.665371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.465 [2024-11-26 18:23:41.665386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.465 [2024-11-26 18:23:41.665401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665490] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.665978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.665994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 
[2024-11-26 18:23:41.666112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.466 [2024-11-26 18:23:41.666460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.466 [2024-11-26 18:23:41.666474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.666504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.467 [2024-11-26 18:23:41.666534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.467 [2024-11-26 18:23:41.666564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.467 [2024-11-26 18:23:41.666593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.467 [2024-11-26 18:23:41.666627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.467 [2024-11-26 18:23:41.666656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.467 [2024-11-26 18:23:41.666686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.467 [2024-11-26 18:23:41.666717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:58 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.467 [2024-11-26 18:23:41.666751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.467 [2024-11-26 18:23:41.666781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.666812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.666842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.666872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.666902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.666932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.666962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.666977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.666992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19808 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 
18:23:41.667379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.467 [2024-11-26 18:23:41.667701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.467 [2024-11-26 18:23:41.667716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.667731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.468 [2024-11-26 18:23:41.667746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.667761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.468 [2024-11-26 18:23:41.667775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.667806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.667823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.667837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.667855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.667868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.667879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20008 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.667892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.667906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.667918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.667929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20016 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.667947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.667962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.667973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.667985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20024 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.667998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668023] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20040 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20048 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20056 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20072 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20080 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20088 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20104 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20112 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20120 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 
18:23:41.668659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20136 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20144 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20152 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.468 [2024-11-26 18:23:41.668913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20168 len:8 PRP1 0x0 PRP2 0x0 00:27:59.468 [2024-11-26 18:23:41.668926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.468 [2024-11-26 18:23:41.668940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.468 [2024-11-26 18:23:41.668951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.469 [2024-11-26 18:23:41.668962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20176 len:8 PRP1 0x0 PRP2 0x0 00:27:59.469 [2024-11-26 18:23:41.668976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.469 [2024-11-26 18:23:41.668990] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.469 [2024-11-26 18:23:41.669001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.469 [2024-11-26 18:23:41.669013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20184 len:8 PRP1 0x0 PRP2 0x0 00:27:59.469 [2024-11-26 18:23:41.669026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.469 [2024-11-26 18:23:41.669096] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:59.469 [2024-11-26 18:23:41.669135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.469 [2024-11-26 18:23:41.669154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.469 [2024-11-26 18:23:41.669169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.469 [2024-11-26 18:23:41.669183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.469 [2024-11-26 18:23:41.669197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.469 [2024-11-26 18:23:41.669215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.469 [2024-11-26 18:23:41.669230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.469 [2024-11-26 18:23:41.669243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.469 [2024-11-26 18:23:41.669256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:59.469 [2024-11-26 18:23:41.669320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ff570 (9): Bad file descriptor 00:27:59.469 [2024-11-26 18:23:41.672566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:59.469 [2024-11-26 18:23:41.702912] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
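The long run of READ/WRITE commands completed with ABORTED - SQ DELETION above is the expected signature of a path failover: when bdev_nvme gives up on the TCP qpair to 10.0.0.2:4422 it aborts every command still queued on that submission queue, then reconnects the controller on 10.0.0.2:4420 ("Resetting controller successful"). The test script right below counts exactly these reset messages; a minimal sketch of the same check, assuming the try.txt capture path used by this workspace:

    # Count successful controller resets in the captured bdevperf output;
    # host/failover.sh expects the count to be 3 for this scenario.
    grep -c 'Resetting controller successful' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt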
00:27:59.469 8411.90 IOPS, 32.86 MiB/s [2024-11-26T17:23:47.480Z] 8441.55 IOPS, 32.97 MiB/s [2024-11-26T17:23:47.480Z] 8449.75 IOPS, 33.01 MiB/s [2024-11-26T17:23:47.480Z] 8462.23 IOPS, 33.06 MiB/s [2024-11-26T17:23:47.480Z] 8468.07 IOPS, 33.08 MiB/s [2024-11-26T17:23:47.480Z] 8477.73 IOPS, 33.12 MiB/s
00:27:59.469 Latency(us)
00:27:59.469 [2024-11-26T17:23:47.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:59.469 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:59.469 Verification LBA range: start 0x0 length 0x4000
00:27:59.469 NVMe0n1 : 15.01 8477.91 33.12 261.94 0.00 14617.80 543.10 16699.54
00:27:59.469 [2024-11-26T17:23:47.480Z] ===================================================================================================================
00:27:59.469 [2024-11-26T17:23:47.480Z] Total : 8477.91 33.12 261.94 0.00 14617.80 543.10 16699.54
00:27:59.469 Received shutdown signal, test time was about 15.000000 seconds
00:27:59.469
00:27:59.469 Latency(us)
00:27:59.469 [2024-11-26T17:23:47.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:59.469 [2024-11-26T17:23:47.480Z] ===================================================================================================================
00:27:59.469 [2024-11-26T17:23:47.480Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=682937
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 682937 /var/tmp/bdevperf.sock
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 682937 ']'
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:59.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
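Here the script restarts bdevperf with -z, so it starts idle and waits to be driven over the RPC socket at /var/tmp/bdevperf.sock. Everything that follows is done through RPC: listeners for the extra ports are added on the target side, the remote namespace is attached as bdev NVMe0n1 with the explicit failover policy, and the workload is finally kicked off with bdevperf.py. A rough sketch of that sequence, using only commands that appear in this trace (addresses, ports and the NQN are the ones this run uses; the workspace path is assumed unchanged):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Attach the subsystem as bdev NVMe0n1, primary path on port 4420, failover multipath mode
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    # Alternate paths on ports 4421 and 4422 are attached the same way in the trace below,
    # then the verify workload is started over the same socket:
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests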
00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.469 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:59.727 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.727 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:59.727 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:59.985 [2024-11-26 18:23:47.817380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:59.985 18:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:00.243 [2024-11-26 18:23:48.078070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:00.243 18:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:00.808 NVMe0n1 00:28:00.808 18:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:01.066 00:28:01.066 18:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:28:01.639 00:28:01.639 18:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:01.639 18:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:28:01.897 18:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:02.155 18:23:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:28:05.433 18:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:05.433 18:23:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:28:05.433 18:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=683620 00:28:05.433 18:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:05.433 18:23:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 683620 00:28:06.366 { 00:28:06.366 "results": [ 00:28:06.366 { 00:28:06.366 "job": "NVMe0n1", 00:28:06.366 "core_mask": "0x1", 00:28:06.366 
"workload": "verify", 00:28:06.366 "status": "finished", 00:28:06.366 "verify_range": { 00:28:06.366 "start": 0, 00:28:06.366 "length": 16384 00:28:06.366 }, 00:28:06.366 "queue_depth": 128, 00:28:06.366 "io_size": 4096, 00:28:06.366 "runtime": 1.00524, 00:28:06.366 "iops": 8475.587919302852, 00:28:06.366 "mibps": 33.10776530977677, 00:28:06.366 "io_failed": 0, 00:28:06.366 "io_timeout": 0, 00:28:06.366 "avg_latency_us": 15035.958752217006, 00:28:06.366 "min_latency_us": 1844.717037037037, 00:28:06.366 "max_latency_us": 13010.10962962963 00:28:06.366 } 00:28:06.366 ], 00:28:06.366 "core_count": 1 00:28:06.366 } 00:28:06.366 18:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:06.366 [2024-11-26 18:23:47.336180] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:28:06.366 [2024-11-26 18:23:47.336275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682937 ] 00:28:06.366 [2024-11-26 18:23:47.405879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.366 [2024-11-26 18:23:47.462845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.366 [2024-11-26 18:23:49.893658] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:06.366 [2024-11-26 18:23:49.893739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.366 [2024-11-26 18:23:49.893762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.366 [2024-11-26 18:23:49.893794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.366 [2024-11-26 18:23:49.893809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.366 [2024-11-26 18:23:49.893824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.366 [2024-11-26 18:23:49.893838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.366 [2024-11-26 18:23:49.893851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.366 [2024-11-26 18:23:49.893865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.366 [2024-11-26 18:23:49.893878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:28:06.366 [2024-11-26 18:23:49.893927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:28:06.366 [2024-11-26 18:23:49.893959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1996570 (9): Bad file descriptor 00:28:06.366 [2024-11-26 18:23:49.904534] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:28:06.366 Running I/O for 1 seconds... 00:28:06.366 8392.00 IOPS, 32.78 MiB/s 00:28:06.366 Latency(us) 00:28:06.366 [2024-11-26T17:23:54.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.366 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:06.366 Verification LBA range: start 0x0 length 0x4000 00:28:06.366 NVMe0n1 : 1.01 8475.59 33.11 0.00 0.00 15035.96 1844.72 13010.11 00:28:06.366 [2024-11-26T17:23:54.377Z] =================================================================================================================== 00:28:06.366 [2024-11-26T17:23:54.377Z] Total : 8475.59 33.11 0.00 0.00 15035.96 1844.72 13010.11 00:28:06.367 18:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:06.367 18:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:28:06.624 18:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:06.881 18:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:06.881 18:23:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:28:07.446 18:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:07.446 18:23:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:28:10.722 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.722 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 682937 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 682937 ']' 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 682937 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 682937 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 682937' 00:28:10.980 killing process with pid 682937 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 682937 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 682937 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:28:10.980 18:23:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.238 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:11.238 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:11.238 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:11.495 rmmod nvme_tcp 00:28:11.495 rmmod nvme_fabrics 00:28:11.495 rmmod nvme_keyring 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 680604 ']' 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 680604 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 680604 ']' 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 680604 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 680604 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 680604' 00:28:11.495 killing process with pid 680604 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 680604 00:28:11.495 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 680604 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.753 18:23:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.655 18:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:13.914 00:28:13.914 real 0m36.186s 00:28:13.914 user 2m7.694s 00:28:13.914 sys 0m5.928s 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:28:13.914 ************************************ 00:28:13.914 END TEST nvmf_failover 00:28:13.914 ************************************ 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.914 ************************************ 00:28:13.914 START TEST nvmf_host_discovery 00:28:13.914 ************************************ 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:13.914 * Looking for test storage... 
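Note (cross-check, not part of the captured test output): the MiB/s column in the bdevperf summary printed at the end of nvmf_failover above follows directly from the per-job JSON fields shown in the same trace, since MiB/s = IOPS * io_size / 2^20. A minimal sketch using only values from this run:

awk 'BEGIN {
  iops    = 8475.587919302852   # "iops" from the bdevperf JSON above
  io_size = 4096                # "io_size" in bytes, from the same JSON
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# Prints 33.11 MiB/s, matching the reported "mibps" of 33.10776530977677.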
00:28:13.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:13.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.914 --rc genhtml_branch_coverage=1 00:28:13.914 --rc genhtml_function_coverage=1 00:28:13.914 --rc genhtml_legend=1 00:28:13.914 --rc geninfo_all_blocks=1 00:28:13.914 --rc geninfo_unexecuted_blocks=1 00:28:13.914 00:28:13.914 ' 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:13.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.914 --rc genhtml_branch_coverage=1 00:28:13.914 --rc genhtml_function_coverage=1 00:28:13.914 --rc genhtml_legend=1 00:28:13.914 --rc geninfo_all_blocks=1 00:28:13.914 --rc geninfo_unexecuted_blocks=1 00:28:13.914 00:28:13.914 ' 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:13.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.914 --rc genhtml_branch_coverage=1 00:28:13.914 --rc genhtml_function_coverage=1 00:28:13.914 --rc genhtml_legend=1 00:28:13.914 --rc geninfo_all_blocks=1 00:28:13.914 --rc geninfo_unexecuted_blocks=1 00:28:13.914 00:28:13.914 ' 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:13.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:13.914 --rc genhtml_branch_coverage=1 00:28:13.914 --rc genhtml_function_coverage=1 00:28:13.914 --rc genhtml_legend=1 00:28:13.914 --rc geninfo_all_blocks=1 00:28:13.914 --rc geninfo_unexecuted_blocks=1 00:28:13.914 00:28:13.914 ' 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:28:13.914 18:24:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.914 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.915 18:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:16.478 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:16.478 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:16.478 18:24:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:16.478 Found net devices under 0000:09:00.0: cvl_0_0 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.478 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:16.479 Found net devices under 0000:09:00.1: cvl_0_1 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:16.479 
18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.479 18:24:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:16.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:28:16.479 00:28:16.479 --- 10.0.0.2 ping statistics --- 00:28:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.479 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:16.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:28:16.479 00:28:16.479 --- 10.0.0.1 ping statistics --- 00:28:16.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.479 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=686344 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 686344 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 686344 ']' 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.479 [2024-11-26 18:24:04.139265] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
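Note (condensed restatement, not part of the captured test output): the nvmf_tcp_init sequence traced above amounts to moving the target-facing E810 port into its own network namespace, addressing both sides, opening the NVMe/TCP port, and ping-verifying the link before the target application starts. Interface and namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are specific to this test bed:

ip netns add cvl_0_0_ns_spdk                          # namespace that will hold the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-facing port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace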
00:28:16.479 [2024-11-26 18:24:04.139379] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.479 [2024-11-26 18:24:04.218053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.479 [2024-11-26 18:24:04.280610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.479 [2024-11-26 18:24:04.280661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:16.479 [2024-11-26 18:24:04.280676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.479 [2024-11-26 18:24:04.280688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.479 [2024-11-26 18:24:04.280699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.479 [2024-11-26 18:24:04.281339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.479 [2024-11-26 18:24:04.418832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.479 [2024-11-26 18:24:04.427001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.479 null0 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.479 null1 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=686478 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 686478 /tmp/host.sock 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 686478 ']' 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:16.479 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:16.479 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.480 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.766 [2024-11-26 18:24:04.500512] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:28:16.766 [2024-11-26 18:24:04.500592] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686478 ] 00:28:16.766 [2024-11-26 18:24:04.566828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.766 [2024-11-26 18:24:04.627435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.766 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.024 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:17.025 18:24:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 [2024-11-26 18:24:05.024642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.025 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:28:17.283 18:24:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:17.283 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:28:17.284 18:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:17.850 [2024-11-26 18:24:05.806401] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:17.850 [2024-11-26 18:24:05.806436] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:17.850 [2024-11-26 18:24:05.806459] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:18.107 
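Note (condensed restatement, not part of the captured test output): the bdev_nvme NOTICE records just above and continuing below show the discovery attach completing: the host connects to the discovery subsystem at 10.0.0.2:8009, reads the discovery log page, finds nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420, and attaches it as controller nvme0 (bdev nvme0n1). The two RPCs that drive this, both visible earlier in this trace via the rpc_cmd wrapper (socket and script paths abbreviated from this run), are roughly:

# Target side: expose the well-known discovery subsystem over TCP on port 8009.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009

# Host side (the second nvmf_tgt listening on /tmp/host.sock): start the discovery service;
# subsystems reported in the discovery log page are attached with the name prefix "nvme".
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test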
[2024-11-26 18:24:05.892776] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:18.107 [2024-11-26 18:24:06.075009] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:28:18.107 [2024-11-26 18:24:06.076139] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2489fe0:1 started. 00:28:18.107 [2024-11-26 18:24:06.078139] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:18.108 [2024-11-26 18:24:06.078165] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:18.108 [2024-11-26 18:24:06.084940] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2489fe0 was disconnected and freed. delete nvme_qpair. 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:18.366 18:24:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.366 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.626 [2024-11-26 18:24:06.388270] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x248a360:1 started. 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:18.626 [2024-11-26 18:24:06.395854] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x248a360 was disconnected and freed. delete nvme_qpair. 
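For reference, the waitforcondition polling loop traced above through common/autotest_common.sh @918-@924 can be approximated by the sketch below. It is reconstructed only from the xtrace lines (the cond/max names and the 10-attempt, 1-second cadence are visible there); the failure return path is an assumption.

  # Poll an arbitrary bash condition until it holds or ~10 attempts elapse.
  waitforcondition() {
      local cond=$1          # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
      local max=10
      while ((max--)); do
          if eval "$cond"; then
              return 0       # condition satisfied
          fi
          sleep 1            # retry once per second
      done
      return 1               # assumed: give up after max attempts
  }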
00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.626 [2024-11-26 18:24:06.477230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:18.626 [2024-11-26 18:24:06.477678] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:18.626 [2024-11-26 18:24:06.477710] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:18.626 18:24:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:18.626 [2024-11-26 18:24:06.563999] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:28:18.626 18:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:18.885 [2024-11-26 18:24:06.864664] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:28:18.885 [2024-11-26 18:24:06.864717] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:18.885 [2024-11-26 18:24:06.864736] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:18.885 [2024-11-26 18:24:06.864749] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:19.821 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.822 [2024-11-26 18:24:07.689083] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:19.822 [2024-11-26 18:24:07.689119] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:19.822 [2024-11-26 18:24:07.689883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.822 [2024-11-26 18:24:07.689915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.822 [2024-11-26 18:24:07.689932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.822 [2024-11-26 18:24:07.689946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.822 [2024-11-26 18:24:07.689960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.822 [2024-11-26 18:24:07.689974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.822 [2024-11-26 18:24:07.689987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:19.822 [2024-11-26 18:24:07.690000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:19.822 [2024-11-26 18:24:07.690013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:19.822 [2024-11-26 18:24:07.699872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.822 [2024-11-26 18:24:07.709913] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.822 [2024-11-26 18:24:07.709935] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.822 [2024-11-26 18:24:07.709945] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.822 [2024-11-26 18:24:07.709953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.822 [2024-11-26 18:24:07.710008] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:19.822 [2024-11-26 18:24:07.710172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.822 [2024-11-26 18:24:07.710202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.822 [2024-11-26 18:24:07.710219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.822 [2024-11-26 18:24:07.710242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.822 [2024-11-26 18:24:07.710276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.822 [2024-11-26 18:24:07.710300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.822 [2024-11-26 18:24:07.710328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:19.822 [2024-11-26 18:24:07.710342] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.822 [2024-11-26 18:24:07.710353] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
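The two host-side query helpers exercised repeatedly in this trace (host/discovery.sh @59 and @55) reduce to RPC-plus-jq pipelines against the secondary application socket /tmp/host.sock. A sketch reconstructed from the traced commands follows; the pipelines are taken verbatim from the trace, the function bodies around them are assumed.

  # Names of the NVMe controllers currently attached on the host application.
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  # Names of the bdevs exposed on the host application (e.g. "nvme0n1 nvme0n2").
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }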
00:28:19.822 [2024-11-26 18:24:07.710361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.822 [2024-11-26 18:24:07.720040] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.822 [2024-11-26 18:24:07.720061] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.822 [2024-11-26 18:24:07.720075] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.822 [2024-11-26 18:24:07.720082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.822 [2024-11-26 18:24:07.720120] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:19.822 [2024-11-26 18:24:07.720317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.822 [2024-11-26 18:24:07.720344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.822 [2024-11-26 18:24:07.720360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.822 [2024-11-26 18:24:07.720389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.822 [2024-11-26 18:24:07.720422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.822 [2024-11-26 18:24:07.720439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.822 [2024-11-26 18:24:07.720453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:19.822 [2024-11-26 18:24:07.720465] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.822 [2024-11-26 18:24:07.720474] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.822 [2024-11-26 18:24:07.720482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.822 [2024-11-26 18:24:07.730154] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.822 [2024-11-26 18:24:07.730175] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.822 [2024-11-26 18:24:07.730183] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.822 [2024-11-26 18:24:07.730190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.822 [2024-11-26 18:24:07.730230] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:19.822 [2024-11-26 18:24:07.730374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.822 [2024-11-26 18:24:07.730402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.822 [2024-11-26 18:24:07.730418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.822 [2024-11-26 18:24:07.730441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.822 [2024-11-26 18:24:07.730506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.822 [2024-11-26 18:24:07.730527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.822 [2024-11-26 18:24:07.730540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:19.822 [2024-11-26 18:24:07.730553] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.822 [2024-11-26 18:24:07.730562] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.822 [2024-11-26 18:24:07.730569] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:19.822 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:19.822 [2024-11-26 18:24:07.740264] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.822 [2024-11-26 18:24:07.740360] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:28:19.822 [2024-11-26 18:24:07.740377] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.822 [2024-11-26 18:24:07.740385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.822 [2024-11-26 18:24:07.740414] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:19.822 [2024-11-26 18:24:07.740515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.822 [2024-11-26 18:24:07.740544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.822 [2024-11-26 18:24:07.740560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.822 [2024-11-26 18:24:07.740583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.822 [2024-11-26 18:24:07.740626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.822 [2024-11-26 18:24:07.740644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.822 [2024-11-26 18:24:07.740658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:19.822 [2024-11-26 18:24:07.740678] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.822 [2024-11-26 18:24:07.740687] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.822 [2024-11-26 18:24:07.740694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.822 [2024-11-26 18:24:07.750448] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.822 [2024-11-26 18:24:07.750471] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.822 [2024-11-26 18:24:07.750481] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.822 [2024-11-26 18:24:07.750489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.822 [2024-11-26 18:24:07.750515] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
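Similarly, the per-controller path check used at @107, @122 and @131 lists the trsvcid of every connected path for a given controller name (host/discovery.sh @63). A hedged reconstruction from the traced pipeline:

  # List the TCP service IDs (ports) of all paths attached to controller $1,
  # sorted numerically, e.g. "4420 4421" while both listeners are connected.
  get_subsystem_paths() {
      local name=$1
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }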
00:28:19.822 [2024-11-26 18:24:07.750630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.822 [2024-11-26 18:24:07.750665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.822 [2024-11-26 18:24:07.750682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.822 [2024-11-26 18:24:07.750704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.822 [2024-11-26 18:24:07.750736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.822 [2024-11-26 18:24:07.750759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.822 [2024-11-26 18:24:07.750773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:19.822 [2024-11-26 18:24:07.750786] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.823 [2024-11-26 18:24:07.750795] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.823 [2024-11-26 18:24:07.750803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.823 [2024-11-26 18:24:07.760549] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.823 [2024-11-26 18:24:07.760571] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.823 [2024-11-26 18:24:07.760581] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.823 [2024-11-26 18:24:07.760604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.823 [2024-11-26 18:24:07.760628] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:19.823 [2024-11-26 18:24:07.760749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.823 [2024-11-26 18:24:07.760776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.823 [2024-11-26 18:24:07.760792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.823 [2024-11-26 18:24:07.760814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.823 [2024-11-26 18:24:07.760847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.823 [2024-11-26 18:24:07.760864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.823 [2024-11-26 18:24:07.760878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:19.823 [2024-11-26 18:24:07.760890] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.823 [2024-11-26 18:24:07.760898] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.823 [2024-11-26 18:24:07.760906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:19.823 [2024-11-26 18:24:07.770681] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.823 [2024-11-26 18:24:07.770703] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.823 [2024-11-26 18:24:07.770712] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.823 [2024-11-26 18:24:07.770720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.823 [2024-11-26 18:24:07.770763] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:19.823 [2024-11-26 18:24:07.770878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.823 [2024-11-26 18:24:07.770907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.823 [2024-11-26 18:24:07.770923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.823 [2024-11-26 18:24:07.770945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.823 [2024-11-26 18:24:07.770985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.823 [2024-11-26 18:24:07.771005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.823 [2024-11-26 18:24:07.771018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:19.823 [2024-11-26 18:24:07.771030] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.823 [2024-11-26 18:24:07.771039] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.823 [2024-11-26 18:24:07.771047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.823 [2024-11-26 18:24:07.780797] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.823 [2024-11-26 18:24:07.780821] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.823 [2024-11-26 18:24:07.780831] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.823 [2024-11-26 18:24:07.780838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.823 [2024-11-26 18:24:07.780879] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.823 [2024-11-26 18:24:07.781008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.823 [2024-11-26 18:24:07.781037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.823 [2024-11-26 18:24:07.781053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.823 [2024-11-26 18:24:07.781076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.823 [2024-11-26 18:24:07.781109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.823 [2024-11-26 18:24:07.781132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.823 [2024-11-26 18:24:07.781147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
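The notification bookkeeping (host/discovery.sh @74-@80) drains bdev notifications newer than the last consumed notify_id and compares the number drained against an expected count. The sketch below is assembled from the traced assignments (the notification_count and notify_id values of 0/1/2 seen above); exact quoting and variable scoping are assumed.

  get_notification_count() {
      # Count notifications newer than the last id already consumed.
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  is_notification_count_eq() {
      local expected_count=$1
      waitforcondition 'get_notification_count && ((notification_count == expected_count))'
  }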
00:28:19.823 [2024-11-26 18:24:07.781167] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.823 [2024-11-26 18:24:07.781178] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.823 [2024-11-26 18:24:07.781186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.823 [2024-11-26 18:24:07.790913] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.823 [2024-11-26 18:24:07.790935] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.823 [2024-11-26 18:24:07.790944] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.823 [2024-11-26 18:24:07.790951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.823 [2024-11-26 18:24:07.790988] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:19.823 [2024-11-26 18:24:07.791199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.823 [2024-11-26 18:24:07.791227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.823 [2024-11-26 18:24:07.791243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.823 [2024-11-26 18:24:07.791265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.823 [2024-11-26 18:24:07.791297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.823 [2024-11-26 18:24:07.791325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.823 [2024-11-26 18:24:07.791340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:19.823 [2024-11-26 18:24:07.791352] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.823 [2024-11-26 18:24:07.791361] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.823 [2024-11-26 18:24:07.791369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.823 [2024-11-26 18:24:07.801022] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.823 [2024-11-26 18:24:07.801043] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.823 [2024-11-26 18:24:07.801052] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.823 [2024-11-26 18:24:07.801059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.823 [2024-11-26 18:24:07.801098] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:28:19.823 [2024-11-26 18:24:07.801252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.823 [2024-11-26 18:24:07.801279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.823 [2024-11-26 18:24:07.801295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.823 [2024-11-26 18:24:07.801328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.823 [2024-11-26 18:24:07.801370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.823 [2024-11-26 18:24:07.801388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.823 [2024-11-26 18:24:07.801401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:19.823 [2024-11-26 18:24:07.801414] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.823 [2024-11-26 18:24:07.801423] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.823 [2024-11-26 18:24:07.801431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:28:19.823 18:24:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:28:19.823 [2024-11-26 18:24:07.811131] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:19.823 [2024-11-26 18:24:07.811151] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:19.823 [2024-11-26 18:24:07.811160] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:19.823 [2024-11-26 18:24:07.811167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:19.823 [2024-11-26 18:24:07.811203] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
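The connect() errno 111 and "Bad file descriptor" reconnect attempts above are the expected fallout of the target-side listener changes driven in this stretch of the test: the 10.0.0.2:4420 listener has just been removed, so the host keeps failing to reconnect to it until discovery steers it onto 4421. The target-side RPC sequence issued between @103 and @127, copied from the trace, is roughly:

  # Target mutations the host is expected to follow via discovery AERs
  # (attach the 4421 path, then drop the 4420 path).
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420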
00:28:19.823 [2024-11-26 18:24:07.811376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:19.823 [2024-11-26 18:24:07.811405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x245c0e0 with addr=10.0.0.2, port=4420 00:28:19.823 [2024-11-26 18:24:07.811421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245c0e0 is same with the state(6) to be set 00:28:19.823 [2024-11-26 18:24:07.811443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245c0e0 (9): Bad file descriptor 00:28:19.823 [2024-11-26 18:24:07.811475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:19.823 [2024-11-26 18:24:07.811492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:19.823 [2024-11-26 18:24:07.811506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:19.823 [2024-11-26 18:24:07.811518] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:19.823 [2024-11-26 18:24:07.811527] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:19.823 [2024-11-26 18:24:07.811535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:19.823 [2024-11-26 18:24:07.814484] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:19.823 [2024-11-26 18:24:07.814514] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # 
expected_count=0 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:21.196 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:21.197 18:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.197 18:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:28:21.197 18:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:28:21.197 18:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:28:21.197 18:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:28:21.197 18:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:21.197 18:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.197 18:24:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.128 [2024-11-26 18:24:10.027181] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:22.128 [2024-11-26 18:24:10.027218] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:22.128 [2024-11-26 18:24:10.027255] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:22.128 [2024-11-26 18:24:10.113701] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:22.385 [2024-11-26 18:24:10.179632] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:28:22.385 [2024-11-26 18:24:10.180541] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2483790:1 started. 
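The same sequence can be reproduced by hand against the host-side app's RPC socket. The rpc.py path, socket and arguments below are the ones visible in the trace; invoking them standalone outside the harness, and the jq filter, are only illustrative:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/tmp/host.sock

    # Count notifications newer than notify_id 2, as get_notification_count does.
    "$RPC" -s "$SOCK" notify_get_notifications -i 2 | jq '. | length'

    # Start discovery against the target's discovery service (port 8009) and
    # block until the initial attach completes (-w). Repeating the call with
    # the same bdev prefix is what produces the -17 "File exists" error that
    # the NOT wrapper checks for further down.
    "$RPC" -s "$SOCK" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w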
00:28:22.385 [2024-11-26 18:24:10.182875] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:22.385 [2024-11-26 18:24:10.182932] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.385 [2024-11-26 18:24:10.185917] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2483790 was disconnected and freed. delete nvme_qpair. 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.385 request: 00:28:22.385 { 00:28:22.385 "name": "nvme", 00:28:22.385 "trtype": "tcp", 00:28:22.385 "traddr": "10.0.0.2", 00:28:22.385 "adrfam": "ipv4", 00:28:22.385 "trsvcid": "8009", 00:28:22.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:22.385 "wait_for_attach": true, 00:28:22.385 "method": "bdev_nvme_start_discovery", 00:28:22.385 "req_id": 1 00:28:22.385 } 00:28:22.385 Got JSON-RPC error response 00:28:22.385 response: 00:28:22.385 { 00:28:22.385 "code": -17, 00:28:22.385 "message": "File exists" 00:28:22.385 } 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:28:22.385 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.386 request: 00:28:22.386 { 00:28:22.386 "name": "nvme_second", 00:28:22.386 "trtype": "tcp", 00:28:22.386 "traddr": "10.0.0.2", 00:28:22.386 "adrfam": "ipv4", 00:28:22.386 "trsvcid": "8009", 00:28:22.386 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:22.386 "wait_for_attach": true, 00:28:22.386 "method": 
"bdev_nvme_start_discovery", 00:28:22.386 "req_id": 1 00:28:22.386 } 00:28:22.386 Got JSON-RPC error response 00:28:22.386 response: 00:28:22.386 { 00:28:22.386 "code": -17, 00:28:22.386 "message": "File exists" 00:28:22.386 } 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:22.386 18:24:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.386 18:24:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:23.756 [2024-11-26 18:24:11.394325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.756 [2024-11-26 18:24:11.394388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24946f0 with addr=10.0.0.2, port=8010 00:28:23.756 [2024-11-26 18:24:11.394420] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:23.756 [2024-11-26 18:24:11.394435] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:23.756 [2024-11-26 18:24:11.394448] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:24.687 [2024-11-26 18:24:12.396793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.687 [2024-11-26 18:24:12.396859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24946f0 with addr=10.0.0.2, port=8010 00:28:24.687 [2024-11-26 18:24:12.396889] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:24.687 [2024-11-26 18:24:12.396912] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:24.687 [2024-11-26 18:24:12.396926] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:25.621 [2024-11-26 18:24:13.398965] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:25.621 request: 00:28:25.621 { 00:28:25.621 "name": "nvme_second", 00:28:25.621 "trtype": "tcp", 00:28:25.621 "traddr": "10.0.0.2", 00:28:25.621 "adrfam": "ipv4", 00:28:25.621 "trsvcid": "8010", 00:28:25.621 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:25.621 "wait_for_attach": false, 00:28:25.621 "attach_timeout_ms": 3000, 00:28:25.621 "method": "bdev_nvme_start_discovery", 00:28:25.621 "req_id": 1 00:28:25.621 } 00:28:25.621 Got JSON-RPC error response 00:28:25.621 response: 00:28:25.621 { 00:28:25.621 "code": -110, 00:28:25.621 "message": "Connection timed out" 00:28:25.621 } 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:28:25.621 18:24:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 686478 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:25.621 rmmod nvme_tcp 00:28:25.621 rmmod nvme_fabrics 00:28:25.621 rmmod nvme_keyring 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 686344 ']' 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 686344 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 686344 ']' 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 686344 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 686344 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 686344' 00:28:25.621 killing process with pid 686344 00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 686344 
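Teardown follows the usual pattern: unload the kernel initiator modules, then stop the target reactor. A condensed sketch of the nvmfcleanup/killprocess steps being traced here, with the retry loop and sudo handling of the real helpers simplified away:

    sync
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0           # already gone, nothing to do
        ps --no-headers -o comm= "$pid"      # the trace checks the command name first
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it so ports and hugepages are freed
    }
    killprocess 686344                       # the nvmf_tgt started for this test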
00:28:25.621 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 686344 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.881 18:24:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:28.420 00:28:28.420 real 0m14.137s 00:28:28.420 user 0m20.757s 00:28:28.420 sys 0m2.877s 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:28:28.420 ************************************ 00:28:28.420 END TEST nvmf_host_discovery 00:28:28.420 ************************************ 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.420 ************************************ 00:28:28.420 START TEST nvmf_host_multipath_status 00:28:28.420 ************************************ 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:28:28.420 * Looking for test storage... 
00:28:28.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:28:28.420 18:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:28.420 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:28.420 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:28.420 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:28.420 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:28.420 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:28:28.420 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:28:28.420 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:28.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.421 --rc genhtml_branch_coverage=1 00:28:28.421 --rc genhtml_function_coverage=1 00:28:28.421 --rc genhtml_legend=1 00:28:28.421 --rc geninfo_all_blocks=1 00:28:28.421 --rc geninfo_unexecuted_blocks=1 00:28:28.421 00:28:28.421 ' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:28.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.421 --rc genhtml_branch_coverage=1 00:28:28.421 --rc genhtml_function_coverage=1 00:28:28.421 --rc genhtml_legend=1 00:28:28.421 --rc geninfo_all_blocks=1 00:28:28.421 --rc geninfo_unexecuted_blocks=1 00:28:28.421 00:28:28.421 ' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:28.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.421 --rc genhtml_branch_coverage=1 00:28:28.421 --rc genhtml_function_coverage=1 00:28:28.421 --rc genhtml_legend=1 00:28:28.421 --rc geninfo_all_blocks=1 00:28:28.421 --rc geninfo_unexecuted_blocks=1 00:28:28.421 00:28:28.421 ' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:28.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.421 --rc genhtml_branch_coverage=1 00:28:28.421 --rc genhtml_function_coverage=1 00:28:28.421 --rc genhtml_legend=1 00:28:28.421 --rc geninfo_all_blocks=1 00:28:28.421 --rc geninfo_unexecuted_blocks=1 00:28:28.421 00:28:28.421 ' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
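The lcov probe above exercises scripts/common.sh's dotted-version comparison (lt → cmp_versions). A compact reconstruction of what those traced calls do; the handling of non-numeric components here is an assumption:

    version_lt() {   # "is $1 < $2?", as in the traced: lt 1.15 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1     # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov, pass the --rc branch/function coverage options"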
00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:28.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:28.421 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:28.422 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:28.422 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.422 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.422 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.422 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:28.422 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:28.422 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:28:28.422 18:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:28:30.322 18:24:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:30.322 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
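What this block is doing: common.sh builds allow-lists of NIC PCI device IDs (E810, X722, Mellanox), keeps only the E810 entries (the [[ e810 == e810 ]] branch), and then resolves each matching PCI function to its kernel net device. A rough equivalent that reads /sys directly instead of the script's prebuilt pci_bus_cache (that substitution is an assumption):

    declare -a e810 net_devs
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
        [[ $vendor == 0x8086 && $device =~ ^0x(1592|159b)$ ]] || continue
        e810+=("$(basename "$dev")")
        for net in "$dev"/net/*; do          # map PCI function -> netdev name
            [[ -e $net ]] && net_devs+=("$(basename "$net")")
        done
    done
    echo "Found net devices: ${net_devs[*]}"  # cvl_0_0 and cvl_0_1 in this run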
00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:30.322 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.322 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:30.323 Found net devices under 0000:09:00.0: cvl_0_0 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: 
cvl_0_1' 00:28:30.323 Found net devices under 0000:09:00.1: cvl_0_1 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.323 18:24:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.323 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:28:30.582 00:28:30.582 --- 10.0.0.2 ping statistics --- 00:28:30.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.582 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:30.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:28:30.582 00:28:30.582 --- 10.0.0.1 ping statistics --- 00:28:30.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.582 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=690162 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 690162 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 690162 ']' 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:30.582 18:24:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:30.582 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:30.582 [2024-11-26 18:24:18.422524] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:28:30.582 [2024-11-26 18:24:18.422620] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.582 [2024-11-26 18:24:18.493234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:30.582 [2024-11-26 18:24:18.550545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.582 [2024-11-26 18:24:18.550612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.582 [2024-11-26 18:24:18.550625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.582 [2024-11-26 18:24:18.550636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.582 [2024-11-26 18:24:18.550658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.582 [2024-11-26 18:24:18.552130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.582 [2024-11-26 18:24:18.552136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.840 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:30.840 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:30.840 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:30.840 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:30.840 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:30.840 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.840 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=690162 00:28:30.840 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:31.098 [2024-11-26 18:24:18.933362] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.098 18:24:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:31.369 Malloc0 00:28:31.369 18:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:28:31.630 18:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:31.887 18:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.167 [2024-11-26 18:24:20.032719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.167 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:32.424 [2024-11-26 18:24:20.317705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=690447 00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 690447 /var/tmp/bdevperf.sock 00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 690447 ']' 00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:32.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
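
The setup traced above reduces to a handful of RPCs against the target plus one bdevperf launch on the host side. A minimal sketch, assuming nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace with its default RPC socket, and that everything is invoked from an SPDK checkout (paths are shortened here; the trace uses the absolute Jenkins workspace paths):

    # Target side: transport, RAM-backed namespace, subsystem with ANA reporting,
    # and two TCP listeners on the same address (two ports = two paths).
    rpc=scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Host side: bdevperf in wait-for-RPC mode (-z), exposing its own RPC socket;
    # the test then waits for /var/tmp/bdevperf.sock before attaching controllers.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
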
00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.424 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:32.682 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.682 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:28:32.682 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:32.941 18:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:33.507 Nvme0n1 00:28:33.507 18:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:34.072 Nvme0n1 00:28:34.072 18:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:28:34.072 18:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:35.970 18:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:28:35.970 18:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:36.228 18:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:36.486 18:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:28:37.858 18:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:28:37.858 18:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:37.858 18:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.858 18:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:37.858 18:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:37.858 18:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:37.858 18:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:37.858 18:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:38.115 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:38.116 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:38.116 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.116 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:38.373 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.373 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:38.373 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.373 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:38.631 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.631 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:38.631 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.631 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:38.889 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:38.889 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:38.889 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:38.889 18:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:39.148 18:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:39.148 18:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:28:39.148 18:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
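
Each ANA transition in the trace (set_ANA_state A B, then sleep 1) is simply two nvmf_subsystem_listener_set_ana_state RPCs, one per listener port, followed by a short settle time so the host's bdev_nvme layer can observe the change. Roughly, as an illustrative reconstruction of what multipath_status.sh is doing here (not the script itself):

    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
        sleep 1   # let the host pick up the new ANA state before checking paths
    }

    # e.g. the transition being exercised at this point in the trace:
    set_ANA_state non_optimized optimized
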
00:28:39.713 18:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:39.971 18:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:28:40.906 18:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:28:40.906 18:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:40.906 18:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:40.906 18:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:41.164 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:41.164 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:41.164 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.164 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:41.423 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.423 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:41.423 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.423 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:41.680 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.680 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:41.680 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:41.680 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:41.938 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:41.938 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:41.938 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
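
Every path check in the trace follows the same pattern: query bdevperf's RPC socket for the I/O paths of the multipath bdev and pick out the current / connected / accessible flag of the path whose trsvcid matches a given listener port. A condensed, illustrative form of that port_status helper (the jq field names are exactly those used in the trace; the wrapper itself is a reconstruction):

    port_status() {
        local port=$1 field=$2 expected=$3 actual
        actual=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ $actual == "$expected" ]]
    }

    # check_status then chains six of these, e.g. for the non_optimized/optimized case:
    port_status 4420 current false && port_status 4421 current true &&
        port_status 4420 connected true && port_status 4421 connected true &&
        port_status 4420 accessible true && port_status 4421 accessible true
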
00:28:41.938 18:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:42.196 18:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.196 18:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:42.196 18:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:42.196 18:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:42.455 18:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:42.455 18:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:28:42.455 18:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:42.714 18:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:28:42.972 18:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:28:44.022 18:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:28:44.022 18:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:44.022 18:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.022 18:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:44.280 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:44.281 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:44.281 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.281 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:44.539 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:44.539 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:44.539 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:44.539 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:45.105 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.105 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:45.105 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.105 18:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:45.105 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.105 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:45.105 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.105 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:45.363 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.363 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:45.363 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:45.363 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:45.929 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:45.929 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:28:45.929 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:45.929 18:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:46.187 18:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:28:47.563 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:28:47.563 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:47.563 18:24:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.563 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:47.563 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:47.563 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:47.563 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.563 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:47.821 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:47.821 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:47.821 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:47.821 18:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:48.078 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.078 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:48.078 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.078 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:48.336 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.336 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:48.336 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.337 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:48.594 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:48.594 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:48.594 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:48.594 18:24:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:49.161 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:49.161 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:28:49.161 18:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:49.161 18:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:49.726 18:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:28:50.661 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:28:50.661 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:50.661 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.661 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:50.919 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:50.919 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:50.919 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:50.919 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:51.177 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:51.177 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:51.177 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.177 18:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:51.434 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.434 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:51.434 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.434 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:51.691 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:51.691 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:51.691 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.691 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:51.949 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:51.949 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:51.949 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:51.949 18:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:52.206 18:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:52.206 18:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:28:52.206 18:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:52.463 18:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:52.722 18:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:28:53.655 18:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:28:53.655 18:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:28:53.655 18:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.655 18:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:53.914 18:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:53.914 18:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:53.914 18:24:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:53.914 18:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:54.171 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.171 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:54.171 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.171 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:54.738 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.738 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:54.738 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.738 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:54.738 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:54.738 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:28:54.738 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.738 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:54.997 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:54.997 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:54.997 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:54.997 18:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:55.563 18:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:55.563 18:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:28:55.563 18:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:28:55.563 18:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:28:55.821 18:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:56.387 18:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:28:57.319 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:28:57.319 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:57.319 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.319 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:57.577 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:57.577 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:28:57.577 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.577 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:57.835 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:57.835 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:57.835 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:57.835 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:58.092 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:58.092 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:58.092 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.092 18:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:58.350 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:58.350 18:24:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:58.350 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.350 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:58.609 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:58.609 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:28:58.609 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:58.609 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:28:58.866 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:58.866 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:28:58.866 18:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:59.124 18:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:59.382 18:24:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:29:00.762 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:29:00.762 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:29:00.762 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.762 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:00.762 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:00.762 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:00.762 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:00.762 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:01.020 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.020 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:01.020 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.021 18:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:01.279 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.279 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:01.279 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.279 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:01.536 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.536 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:01.536 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.536 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:01.794 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:01.794 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:01.794 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:01.795 18:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:02.051 18:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:02.051 18:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:29:02.051 18:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:02.309 18:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:29:02.566 18:24:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
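
From multipath_status.sh@116 onwards the host-side policy on Nvme0n1 is switched from the default active_passive to active_active, after which both paths are expected to report current=true at the same time whenever both are accessible; that is what the check_status true true true true true true calls above and below assert. A small sketch of that verification, with the output shape assumed from the jq filters used in the trace:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

    # List each path's port and whether it is currently used for I/O.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current)"'
    # expected while both listeners are optimized or non_optimized:
    #   4420 current=true
    #   4421 current=true
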
00:29:03.940 18:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:29:03.940 18:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:03.940 18:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:03.940 18:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:03.940 18:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:03.940 18:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:29:03.940 18:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:03.940 18:24:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:04.198 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:04.198 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:04.198 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:04.198 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:04.456 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:04.456 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:04.456 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:04.456 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:04.714 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:04.714 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:04.714 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:04.714 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:04.971 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:04.971 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:29:04.971 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:04.971 18:24:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:05.537 18:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:05.537 18:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:29:05.537 18:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:05.537 18:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:05.795 18:24:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:29:07.168 18:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:29:07.168 18:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:29:07.168 18:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.168 18:24:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:29:07.168 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:07.168 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:29:07.168 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.168 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:29:07.426 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:07.426 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:29:07.426 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.426 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:29:07.684 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:29:07.684 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:29:07.684 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.684 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:29:07.942 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:07.942 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:29:07.942 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:07.942 18:24:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:29:08.200 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:29:08.200 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:29:08.200 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:29:08.200 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:29:08.459 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:29:08.459 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 690447 00:29:08.459 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 690447 ']' 00:29:08.459 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 690447 00:29:08.459 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:08.775 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.775 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690447 00:29:08.775 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:29:08.775 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:29:08.775 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690447' 00:29:08.775 killing process with pid 690447 00:29:08.775 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 690447 00:29:08.775 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 690447 00:29:08.775 { 00:29:08.775 "results": [ 00:29:08.775 { 00:29:08.775 "job": "Nvme0n1", 00:29:08.775 
"core_mask": "0x4", 00:29:08.775 "workload": "verify", 00:29:08.775 "status": "terminated", 00:29:08.775 "verify_range": { 00:29:08.775 "start": 0, 00:29:08.775 "length": 16384 00:29:08.775 }, 00:29:08.775 "queue_depth": 128, 00:29:08.775 "io_size": 4096, 00:29:08.775 "runtime": 34.400927, 00:29:08.775 "iops": 7929.495620859287, 00:29:08.775 "mibps": 30.97459226898159, 00:29:08.775 "io_failed": 0, 00:29:08.775 "io_timeout": 0, 00:29:08.775 "avg_latency_us": 16115.531793805229, 00:29:08.775 "min_latency_us": 371.6740740740741, 00:29:08.775 "max_latency_us": 4026531.84 00:29:08.775 } 00:29:08.775 ], 00:29:08.775 "core_count": 1 00:29:08.775 } 00:29:08.775 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 690447 00:29:08.775 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:08.775 [2024-11-26 18:24:20.386229] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:29:08.775 [2024-11-26 18:24:20.386351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690447 ] 00:29:08.775 [2024-11-26 18:24:20.456907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.775 [2024-11-26 18:24:20.518927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.775 Running I/O for 90 seconds... 00:29:08.775 8369.00 IOPS, 32.69 MiB/s [2024-11-26T17:24:56.786Z] 8402.50 IOPS, 32.82 MiB/s [2024-11-26T17:24:56.786Z] 8406.00 IOPS, 32.84 MiB/s [2024-11-26T17:24:56.786Z] 8411.25 IOPS, 32.86 MiB/s [2024-11-26T17:24:56.786Z] 8432.80 IOPS, 32.94 MiB/s [2024-11-26T17:24:56.786Z] 8430.50 IOPS, 32.93 MiB/s [2024-11-26T17:24:56.786Z] 8423.29 IOPS, 32.90 MiB/s [2024-11-26T17:24:56.786Z] 8402.00 IOPS, 32.82 MiB/s [2024-11-26T17:24:56.786Z] 8383.89 IOPS, 32.75 MiB/s [2024-11-26T17:24:56.786Z] 8397.10 IOPS, 32.80 MiB/s [2024-11-26T17:24:56.786Z] 8407.45 IOPS, 32.84 MiB/s [2024-11-26T17:24:56.786Z] 8401.00 IOPS, 32.82 MiB/s [2024-11-26T17:24:56.786Z] 8401.08 IOPS, 32.82 MiB/s [2024-11-26T17:24:56.786Z] 8400.43 IOPS, 32.81 MiB/s [2024-11-26T17:24:56.786Z] 8416.47 IOPS, 32.88 MiB/s [2024-11-26T17:24:56.786Z] [2024-11-26 18:24:37.141823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.141884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.141955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.141977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:29:08.775 [2024-11-26 18:24:37.142921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.142977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.142999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.143015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.143038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.143054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.143077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.775 [2024-11-26 18:24:37.143093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:08.775 [2024-11-26 18:24:37.143116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.143663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.776 [2024-11-26 18:24:37.143701] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.776 [2024-11-26 18:24:37.143744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.776 [2024-11-26 18:24:37.143784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.776 [2024-11-26 18:24:37.143822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.776 [2024-11-26 18:24:37.143861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.776 [2024-11-26 18:24:37.143899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.776 [2024-11-26 18:24:37.143938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.143961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.776 [2024-11-26 18:24:37.143977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 
18:24:37.144096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98064 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.776 [2024-11-26 18:24:37.144624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.776 [2024-11-26 18:24:37.144664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:08.776 [2024-11-26 18:24:37.144872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.144895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.144931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.144950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.144976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.144993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:114 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.145871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.777 [2024-11-26 18:24:37.145913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.145939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.777 [2024-11-26 18:24:37.145956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:29:08.777 [2024-11-26 18:24:37.145982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.777 [2024-11-26 18:24:37.146003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.777 [2024-11-26 18:24:37.146047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.777 [2024-11-26 18:24:37.146089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.777 [2024-11-26 18:24:37.146132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.777 [2024-11-26 18:24:37.146174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.146217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.146370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.146421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.146467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.146511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.146556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.146600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.146649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:08.777 [2024-11-26 18:24:37.146686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.777 [2024-11-26 18:24:37.146704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.146732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.146748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.146776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.146793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.146820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.146837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.146864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.146881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.146909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.146925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.146953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.146969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.146997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:08.778 [2024-11-26 18:24:37.147428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.778 [2024-11-26 18:24:37.147781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:37.147826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 
nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:37.147872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:37.147916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:37.147960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.147988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:37.148005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.148033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:37.148050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.148077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:37.148094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:37.148122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:37.148139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:08.778 7929.88 IOPS, 30.98 MiB/s [2024-11-26T17:24:56.789Z] 7463.41 IOPS, 29.15 MiB/s [2024-11-26T17:24:56.789Z] 7048.78 IOPS, 27.53 MiB/s [2024-11-26T17:24:56.789Z] 6677.79 IOPS, 26.09 MiB/s [2024-11-26T17:24:56.789Z] 6733.15 IOPS, 26.30 MiB/s [2024-11-26T17:24:56.789Z] 6810.62 IOPS, 26.60 MiB/s [2024-11-26T17:24:56.789Z] 6908.45 IOPS, 26.99 MiB/s [2024-11-26T17:24:56.789Z] 7090.87 IOPS, 27.70 MiB/s [2024-11-26T17:24:56.789Z] 7267.25 IOPS, 28.39 MiB/s [2024-11-26T17:24:56.789Z] 7423.00 IOPS, 29.00 MiB/s [2024-11-26T17:24:56.789Z] 7461.15 IOPS, 29.15 MiB/s [2024-11-26T17:24:56.789Z] 7493.52 IOPS, 29.27 MiB/s [2024-11-26T17:24:56.789Z] 7519.25 IOPS, 29.37 MiB/s [2024-11-26T17:24:56.789Z] 7598.72 IOPS, 29.68 MiB/s [2024-11-26T17:24:56.789Z] 7706.10 IOPS, 30.10 MiB/s [2024-11-26T17:24:56.789Z] 7822.32 IOPS, 30.56 MiB/s [2024-11-26T17:24:56.789Z] [2024-11-26 18:24:53.758507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:53.758583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:53.758648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:53.758685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:53.758721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:53.758739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:08.778 [2024-11-26 18:24:53.758762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.778 [2024-11-26 18:24:53.758777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:30024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:30216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759794] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.759973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.759989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:30160 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.779 [2024-11-26 18:24:53.760893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:100 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.779 [2024-11-26 18:24:53.760940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.760962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.760978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.761001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.761017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.761039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.761055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.761076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.779 [2024-11-26 18:24:53.761092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.761114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.779 [2024-11-26 18:24:53.761130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:08.779 [2024-11-26 18:24:53.761166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.780 [2024-11-26 18:24:53.761182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.780 [2024-11-26 18:24:53.761219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.780 [2024-11-26 18:24:53.761261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.780 [2024-11-26 18:24:53.761324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761349] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.780 [2024-11-26 18:24:53.761366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.780 [2024-11-26 18:24:53.761404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.780 [2024-11-26 18:24:53.761442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.780 [2024-11-26 18:24:53.761481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.780 [2024-11-26 18:24:53.761528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.780 [2024-11-26 18:24:53.761566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.780 [2024-11-26 18:24:53.761604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:08.780 [2024-11-26 18:24:53.761626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:08.780 [2024-11-26 18:24:53.761643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:08.780 7892.84 IOPS, 30.83 MiB/s [2024-11-26T17:24:56.791Z] 7911.15 IOPS, 30.90 MiB/s [2024-11-26T17:24:56.791Z] 7926.18 IOPS, 30.96 MiB/s [2024-11-26T17:24:56.791Z] Received shutdown signal, test time was about 34.401724 seconds 00:29:08.780 00:29:08.780 Latency(us) 00:29:08.780 [2024-11-26T17:24:56.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.780 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:08.780 Verification LBA range: start 0x0 length 0x4000 00:29:08.780 Nvme0n1 : 34.40 7929.50 30.97 0.00 0.00 16115.53 371.67 4026531.84 00:29:08.780 [2024-11-26T17:24:56.791Z] 
=================================================================================================================== 00:29:08.780 [2024-11-26T17:24:56.791Z] Total : 7929.50 30.97 0.00 0.00 16115.53 371.67 4026531.84 00:29:08.780 18:24:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:09.047 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:29:09.047 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:09.047 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:29:09.047 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:09.047 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:29:09.047 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:09.047 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:29:09.047 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:09.047 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:09.047 rmmod nvme_tcp 00:29:09.047 rmmod nvme_fabrics 00:29:09.305 rmmod nvme_keyring 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 690162 ']' 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 690162 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 690162 ']' 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 690162 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690162 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690162' 00:29:09.305 killing process with pid 690162 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 690162 00:29:09.305 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 690162 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.564 18:24:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.464 18:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:11.464 00:29:11.464 real 0m43.492s 00:29:11.464 user 2m12.733s 00:29:11.464 sys 0m10.540s 00:29:11.465 18:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.465 18:24:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:29:11.465 ************************************ 00:29:11.465 END TEST nvmf_host_multipath_status 00:29:11.465 ************************************ 00:29:11.465 18:24:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:11.465 18:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:11.465 18:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.465 18:24:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.465 ************************************ 00:29:11.465 START TEST nvmf_discovery_remove_ifc 00:29:11.465 ************************************ 00:29:11.465 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:11.723 * Looking for test storage... 
00:29:11.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:29:11.723 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:11.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.724 --rc genhtml_branch_coverage=1 00:29:11.724 --rc genhtml_function_coverage=1 00:29:11.724 --rc genhtml_legend=1 00:29:11.724 --rc geninfo_all_blocks=1 00:29:11.724 --rc geninfo_unexecuted_blocks=1 00:29:11.724 00:29:11.724 ' 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:11.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.724 --rc genhtml_branch_coverage=1 00:29:11.724 --rc genhtml_function_coverage=1 00:29:11.724 --rc genhtml_legend=1 00:29:11.724 --rc geninfo_all_blocks=1 00:29:11.724 --rc geninfo_unexecuted_blocks=1 00:29:11.724 00:29:11.724 ' 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:11.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.724 --rc genhtml_branch_coverage=1 00:29:11.724 --rc genhtml_function_coverage=1 00:29:11.724 --rc genhtml_legend=1 00:29:11.724 --rc geninfo_all_blocks=1 00:29:11.724 --rc geninfo_unexecuted_blocks=1 00:29:11.724 00:29:11.724 ' 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:11.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.724 --rc genhtml_branch_coverage=1 00:29:11.724 --rc genhtml_function_coverage=1 00:29:11.724 --rc genhtml_legend=1 00:29:11.724 --rc geninfo_all_blocks=1 00:29:11.724 --rc geninfo_unexecuted_blocks=1 00:29:11.724 00:29:11.724 ' 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.724 
18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.724 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.725 18:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:29:13.626 18:25:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:13.626 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.626 18:25:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:13.626 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:13.626 Found net devices under 0000:09:00.0: cvl_0_0 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:13.626 Found net devices under 0000:09:00.1: cvl_0_1 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:13.626 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:13.885 
18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:13.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:13.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:29:13.885 00:29:13.885 --- 10.0.0.2 ping statistics --- 00:29:13.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.885 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:13.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:13.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:29:13.885 00:29:13.885 --- 10.0.0.1 ping statistics --- 00:29:13.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:13.885 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=696821 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 696821 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 696821 ']' 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:13.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.885 18:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:13.885 [2024-11-26 18:25:01.827883] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:29:13.885 [2024-11-26 18:25:01.827976] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.144 [2024-11-26 18:25:01.903481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.144 [2024-11-26 18:25:01.962760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.144 [2024-11-26 18:25:01.962809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.144 [2024-11-26 18:25:01.962839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.144 [2024-11-26 18:25:01.962850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.144 [2024-11-26 18:25:01.962860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.144 [2024-11-26 18:25:01.963496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.144 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.144 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:14.144 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:14.144 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.144 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.144 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.144 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:14.144 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.144 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.144 [2024-11-26 18:25:02.119132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.144 [2024-11-26 18:25:02.127336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:14.144 null0 00:29:14.402 [2024-11-26 18:25:02.159240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=696962 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 696962 /tmp/host.sock 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 696962 ']' 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:14.402 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.402 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.402 [2024-11-26 18:25:02.224563] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:29:14.402 [2024-11-26 18:25:02.224643] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696962 ] 00:29:14.402 [2024-11-26 18:25:02.289834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.402 [2024-11-26 18:25:02.348743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.660 18:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:16.035 [2024-11-26 18:25:03.638470] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:16.035 [2024-11-26 18:25:03.638495] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:16.035 [2024-11-26 18:25:03.638523] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:16.035 [2024-11-26 18:25:03.724824] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:16.035 [2024-11-26 18:25:03.785618] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:29:16.035 [2024-11-26 18:25:03.786547] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x239ffd0:1 started. 00:29:16.035 [2024-11-26 18:25:03.788172] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:16.035 [2024-11-26 18:25:03.788226] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:16.035 [2024-11-26 18:25:03.788265] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:16.035 [2024-11-26 18:25:03.788309] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:16.035 [2024-11-26 18:25:03.788341] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:16.035 [2024-11-26 18:25:03.795150] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x239ffd0 was disconnected and freed. delete nvme_qpair. 
00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:16.035 18:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:16.968 18:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:18.341 18:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:18.341 18:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:18.341 18:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.341 18:25:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:18.341 18:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:18.341 18:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:18.341 18:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:18.341 18:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.341 18:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:18.341 18:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:19.275 18:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:20.208 18:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:21.142 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:21.142 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:21.142 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:21.142 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.142 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:21.142 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:21.142 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:21.142 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.399 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:21.399 18:25:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:21.399 [2024-11-26 18:25:09.229672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:21.399 [2024-11-26 18:25:09.229746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.399 [2024-11-26 18:25:09.229766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-26 18:25:09.229783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.399 [2024-11-26 18:25:09.229796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-26 18:25:09.229809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.399 [2024-11-26 18:25:09.229822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-26 18:25:09.229834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.399 [2024-11-26 18:25:09.229847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-26 18:25:09.229859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:21.399 [2024-11-26 18:25:09.229872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:21.399 [2024-11-26 18:25:09.229884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237c860 is same with the state(6) to be set 00:29:21.399 [2024-11-26 18:25:09.239691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237c860 (9): Bad file descriptor 00:29:21.399 [2024-11-26 18:25:09.249734] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:21.399 [2024-11-26 18:25:09.249756] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
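The repeated @29/@33/@34 fragments above come from a small polling helper in the test script. A rough reconstruction from the xtrace output alone is sketched below; the real discovery_remove_ifc.sh may differ in detail.

    get_bdev_list() {
        # names of all bdevs currently known to the host app, joined into one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected value
        # (an empty argument waits for the list to become empty)
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }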
00:29:21.399 [2024-11-26 18:25:09.249766] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:21.399 [2024-11-26 18:25:09.249774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:21.399 [2024-11-26 18:25:09.249830] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:22.332 [2024-11-26 18:25:10.294338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:22.332 [2024-11-26 18:25:10.294405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237c860 with addr=10.0.0.2, port=4420 00:29:22.332 [2024-11-26 18:25:10.294447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237c860 is same with the state(6) to be set 00:29:22.332 [2024-11-26 18:25:10.294505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237c860 (9): Bad file descriptor 00:29:22.332 [2024-11-26 18:25:10.294994] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:29:22.332 [2024-11-26 18:25:10.295035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:22.332 [2024-11-26 18:25:10.295051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:22.332 [2024-11-26 18:25:10.295066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:22.332 [2024-11-26 18:25:10.295078] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:22.332 [2024-11-26 18:25:10.295089] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:22.332 [2024-11-26 18:25:10.295097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:22.332 [2024-11-26 18:25:10.295111] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:29:22.332 [2024-11-26 18:25:10.295120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:22.332 18:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:23.704 [2024-11-26 18:25:11.297612] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:23.704 [2024-11-26 18:25:11.297654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:23.704 [2024-11-26 18:25:11.297673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:23.704 [2024-11-26 18:25:11.297685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:23.704 [2024-11-26 18:25:11.297697] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:29:23.704 [2024-11-26 18:25:11.297723] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:23.704 [2024-11-26 18:25:11.297732] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:23.704 [2024-11-26 18:25:11.297739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:23.704 [2024-11-26 18:25:11.297783] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:23.704 [2024-11-26 18:25:11.297820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.704 [2024-11-26 18:25:11.297838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.704 [2024-11-26 18:25:11.297855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.704 [2024-11-26 18:25:11.297867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.704 [2024-11-26 18:25:11.297880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.704 [2024-11-26 18:25:11.297897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.704 [2024-11-26 18:25:11.297910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.704 [2024-11-26 18:25:11.297921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.704 [2024-11-26 18:25:11.297934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:23.704 [2024-11-26 18:25:11.297946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:23.704 [2024-11-26 18:25:11.297957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:29:23.704 [2024-11-26 18:25:11.298115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236bb50 (9): Bad file descriptor 00:29:23.704 [2024-11-26 18:25:11.299133] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:23.704 [2024-11-26 18:25:11.299154] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:23.704 18:25:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:24.637 18:25:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:24.637 18:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:24.637 18:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:24.637 18:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.637 18:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:24.637 18:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:24.637 18:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:24.637 18:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.637 18:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:24.637 18:25:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:25.571 [2024-11-26 18:25:13.311781] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:25.571 [2024-11-26 18:25:13.311805] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:25.571 [2024-11-26 18:25:13.311826] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:25.571 [2024-11-26 18:25:13.398104] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:25.571 18:25:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:25.828 [2024-11-26 18:25:13.613255] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:29:25.828 [2024-11-26 18:25:13.614227] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2355b40:1 started. 
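The path failure exercised above is injected purely with iproute2, as the @75/@76 and @82/@83 steps in the trace show. Collected here for readability; the netns and interface names are specific to this CI host.

    # drop the target-side address and link so the host's reconnect attempts fail (errno 110)
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... wait for the bdev to disappear, then restore the path ...
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # discovery then reattaches and the namespace reappears as nvme1n1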
00:29:25.828 [2024-11-26 18:25:13.615666] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:25.828 [2024-11-26 18:25:13.615710] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:25.828 [2024-11-26 18:25:13.615742] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:25.828 [2024-11-26 18:25:13.615765] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:25.828 [2024-11-26 18:25:13.615780] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:25.828 [2024-11-26 18:25:13.621062] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2355b40 was disconnected and freed. delete nvme_qpair. 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 696962 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 696962 ']' 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 696962 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696962 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696962' 00:29:26.760 killing process with pid 696962 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 696962 00:29:26.760 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 696962 00:29:27.018 18:25:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.018 rmmod nvme_tcp 00:29:27.018 rmmod nvme_fabrics 00:29:27.018 rmmod nvme_keyring 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 696821 ']' 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 696821 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 696821 ']' 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 696821 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696821 00:29:27.018 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:27.019 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:27.019 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696821' 00:29:27.019 killing process with pid 696821 00:29:27.019 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 696821 00:29:27.019 18:25:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 696821 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.278 18:25:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.183 18:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.442 00:29:29.442 real 0m17.747s 00:29:29.442 user 0m25.743s 00:29:29.442 sys 0m3.049s 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:29:29.442 ************************************ 00:29:29.442 END TEST nvmf_discovery_remove_ifc 00:29:29.442 ************************************ 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.442 ************************************ 00:29:29.442 START TEST nvmf_identify_kernel_target 00:29:29.442 ************************************ 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:29:29.442 * Looking for test storage... 
00:29:29.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.442 --rc genhtml_branch_coverage=1 00:29:29.442 --rc genhtml_function_coverage=1 00:29:29.442 --rc genhtml_legend=1 00:29:29.442 --rc geninfo_all_blocks=1 00:29:29.442 --rc geninfo_unexecuted_blocks=1 00:29:29.442 00:29:29.442 ' 00:29:29.442 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.442 --rc genhtml_branch_coverage=1 00:29:29.442 --rc genhtml_function_coverage=1 00:29:29.442 --rc genhtml_legend=1 00:29:29.442 --rc geninfo_all_blocks=1 00:29:29.442 --rc geninfo_unexecuted_blocks=1 00:29:29.442 00:29:29.442 ' 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.443 --rc genhtml_branch_coverage=1 00:29:29.443 --rc genhtml_function_coverage=1 00:29:29.443 --rc genhtml_legend=1 00:29:29.443 --rc geninfo_all_blocks=1 00:29:29.443 --rc geninfo_unexecuted_blocks=1 00:29:29.443 00:29:29.443 ' 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.443 --rc genhtml_branch_coverage=1 00:29:29.443 --rc genhtml_function_coverage=1 00:29:29.443 --rc genhtml_legend=1 00:29:29.443 --rc geninfo_all_blocks=1 00:29:29.443 --rc geninfo_unexecuted_blocks=1 00:29:29.443 00:29:29.443 ' 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:29:29.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.443 18:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.974 18:25:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:31.974 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:31.974 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.974 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:31.975 Found net devices under 0000:09:00.0: cvl_0_0 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:31.975 Found net devices under 0000:09:00.1: cvl_0_1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:29:31.975 00:29:31.975 --- 10.0.0.2 ping statistics --- 00:29:31.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.975 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:31.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:29:31.975 00:29:31.975 --- 10.0.0.1 ping statistics --- 00:29:31.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.975 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.975 18:25:19 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:31.975 18:25:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:32.910 Waiting for block devices as requested 00:29:32.910 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:32.910 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:33.169 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:33.169 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:33.169 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:33.169 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:33.169 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:33.426 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:33.426 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:29:33.426 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:33.684 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:33.684 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:33.684 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:33.684 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:33.944 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:33.944 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:33.944 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
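Editor's note: the nvmf_tcp_init trace above boils down to a small two-port loopback topology: one port of the E810 NIC (cvl_0_1) keeps 10.0.0.1/24 in the default namespace, the other (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, links and loopback are brought up, TCP port 4420 is opened in iptables, and a ping in each direction confirms connectivity; setup.sh reset then hands the local NVMe drive back to the kernel nvme driver so a /sys/block/nvme* node exists to back the kernel target configured on the following lines. A minimal standalone sketch of that plumbing, with hypothetical names tgt_ns/eth_tgt/eth_init standing in for the namespace and interfaces (run as root; iproute2 and iptables assumed):

  NS=tgt_ns
  IF_TGT=eth_tgt     # stands in for cvl_0_0, the port moved into the namespace
  IF_INIT=eth_init   # stands in for cvl_0_1, the port left in the default namespace
  ip -4 addr flush "$IF_TGT"
  ip -4 addr flush "$IF_INIT"
  ip netns add "$NS"
  ip link set "$IF_TGT" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$IF_INIT"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"
  ip link set "$IF_INIT" up
  ip netns exec "$NS" ip link set "$IF_TGT" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$IF_INIT" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                        # default namespace -> namespaced port
  ip netns exec "$NS" ping -c 1 10.0.0.1    # namespaced port -> default namespace

Keeping the target-side port in its own namespace is what lets later tests in this suite run the SPDK target and the host initiator on one machine over the real NIC pair instead of the loopback device. The log now resumes with the block-device selection loop.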
00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:34.203 No valid GPT data, bailing 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:29:34.203 00:29:34.203 Discovery Log Number of Records 2, Generation counter 2 00:29:34.203 =====Discovery Log Entry 0====== 00:29:34.203 trtype: tcp 00:29:34.203 adrfam: ipv4 00:29:34.203 subtype: current discovery subsystem 00:29:34.203 treq: not specified, sq flow control disable supported 00:29:34.203 portid: 1 00:29:34.203 trsvcid: 4420 00:29:34.203 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:34.203 traddr: 10.0.0.1 00:29:34.203 eflags: none 00:29:34.203 sectype: none 00:29:34.203 =====Discovery Log Entry 1====== 00:29:34.203 trtype: tcp 00:29:34.203 adrfam: ipv4 00:29:34.203 subtype: nvme subsystem 00:29:34.203 treq: not specified, sq flow control disable 
supported 00:29:34.203 portid: 1 00:29:34.203 trsvcid: 4420 00:29:34.203 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:34.203 traddr: 10.0.0.1 00:29:34.203 eflags: none 00:29:34.203 sectype: none 00:29:34.203 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:29:34.203 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:29:34.463 ===================================================== 00:29:34.463 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:34.463 ===================================================== 00:29:34.463 Controller Capabilities/Features 00:29:34.463 ================================ 00:29:34.463 Vendor ID: 0000 00:29:34.463 Subsystem Vendor ID: 0000 00:29:34.463 Serial Number: e0801507ae825e8e7061 00:29:34.463 Model Number: Linux 00:29:34.463 Firmware Version: 6.8.9-20 00:29:34.463 Recommended Arb Burst: 0 00:29:34.463 IEEE OUI Identifier: 00 00 00 00:29:34.463 Multi-path I/O 00:29:34.463 May have multiple subsystem ports: No 00:29:34.463 May have multiple controllers: No 00:29:34.463 Associated with SR-IOV VF: No 00:29:34.463 Max Data Transfer Size: Unlimited 00:29:34.463 Max Number of Namespaces: 0 00:29:34.463 Max Number of I/O Queues: 1024 00:29:34.463 NVMe Specification Version (VS): 1.3 00:29:34.463 NVMe Specification Version (Identify): 1.3 00:29:34.463 Maximum Queue Entries: 1024 00:29:34.463 Contiguous Queues Required: No 00:29:34.463 Arbitration Mechanisms Supported 00:29:34.463 Weighted Round Robin: Not Supported 00:29:34.463 Vendor Specific: Not Supported 00:29:34.463 Reset Timeout: 7500 ms 00:29:34.463 Doorbell Stride: 4 bytes 00:29:34.463 NVM Subsystem Reset: Not Supported 00:29:34.463 Command Sets Supported 00:29:34.463 NVM Command Set: Supported 00:29:34.463 Boot Partition: Not Supported 00:29:34.463 Memory Page Size Minimum: 4096 bytes 00:29:34.463 Memory Page Size Maximum: 4096 bytes 00:29:34.463 Persistent Memory Region: Not Supported 00:29:34.463 Optional Asynchronous Events Supported 00:29:34.463 Namespace Attribute Notices: Not Supported 00:29:34.463 Firmware Activation Notices: Not Supported 00:29:34.463 ANA Change Notices: Not Supported 00:29:34.463 PLE Aggregate Log Change Notices: Not Supported 00:29:34.463 LBA Status Info Alert Notices: Not Supported 00:29:34.463 EGE Aggregate Log Change Notices: Not Supported 00:29:34.463 Normal NVM Subsystem Shutdown event: Not Supported 00:29:34.463 Zone Descriptor Change Notices: Not Supported 00:29:34.463 Discovery Log Change Notices: Supported 00:29:34.463 Controller Attributes 00:29:34.463 128-bit Host Identifier: Not Supported 00:29:34.463 Non-Operational Permissive Mode: Not Supported 00:29:34.463 NVM Sets: Not Supported 00:29:34.463 Read Recovery Levels: Not Supported 00:29:34.463 Endurance Groups: Not Supported 00:29:34.463 Predictable Latency Mode: Not Supported 00:29:34.463 Traffic Based Keep ALive: Not Supported 00:29:34.463 Namespace Granularity: Not Supported 00:29:34.463 SQ Associations: Not Supported 00:29:34.463 UUID List: Not Supported 00:29:34.463 Multi-Domain Subsystem: Not Supported 00:29:34.463 Fixed Capacity Management: Not Supported 00:29:34.463 Variable Capacity Management: Not Supported 00:29:34.463 Delete Endurance Group: Not Supported 00:29:34.463 Delete NVM Set: Not Supported 00:29:34.463 Extended LBA Formats Supported: Not Supported 00:29:34.463 Flexible Data Placement 
Supported: Not Supported 00:29:34.463 00:29:34.463 Controller Memory Buffer Support 00:29:34.463 ================================ 00:29:34.463 Supported: No 00:29:34.463 00:29:34.463 Persistent Memory Region Support 00:29:34.463 ================================ 00:29:34.463 Supported: No 00:29:34.464 00:29:34.464 Admin Command Set Attributes 00:29:34.464 ============================ 00:29:34.464 Security Send/Receive: Not Supported 00:29:34.464 Format NVM: Not Supported 00:29:34.464 Firmware Activate/Download: Not Supported 00:29:34.464 Namespace Management: Not Supported 00:29:34.464 Device Self-Test: Not Supported 00:29:34.464 Directives: Not Supported 00:29:34.464 NVMe-MI: Not Supported 00:29:34.464 Virtualization Management: Not Supported 00:29:34.464 Doorbell Buffer Config: Not Supported 00:29:34.464 Get LBA Status Capability: Not Supported 00:29:34.464 Command & Feature Lockdown Capability: Not Supported 00:29:34.464 Abort Command Limit: 1 00:29:34.464 Async Event Request Limit: 1 00:29:34.464 Number of Firmware Slots: N/A 00:29:34.464 Firmware Slot 1 Read-Only: N/A 00:29:34.464 Firmware Activation Without Reset: N/A 00:29:34.464 Multiple Update Detection Support: N/A 00:29:34.464 Firmware Update Granularity: No Information Provided 00:29:34.464 Per-Namespace SMART Log: No 00:29:34.464 Asymmetric Namespace Access Log Page: Not Supported 00:29:34.464 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:34.464 Command Effects Log Page: Not Supported 00:29:34.464 Get Log Page Extended Data: Supported 00:29:34.464 Telemetry Log Pages: Not Supported 00:29:34.464 Persistent Event Log Pages: Not Supported 00:29:34.464 Supported Log Pages Log Page: May Support 00:29:34.464 Commands Supported & Effects Log Page: Not Supported 00:29:34.464 Feature Identifiers & Effects Log Page:May Support 00:29:34.464 NVMe-MI Commands & Effects Log Page: May Support 00:29:34.464 Data Area 4 for Telemetry Log: Not Supported 00:29:34.464 Error Log Page Entries Supported: 1 00:29:34.464 Keep Alive: Not Supported 00:29:34.464 00:29:34.464 NVM Command Set Attributes 00:29:34.464 ========================== 00:29:34.464 Submission Queue Entry Size 00:29:34.464 Max: 1 00:29:34.464 Min: 1 00:29:34.464 Completion Queue Entry Size 00:29:34.464 Max: 1 00:29:34.464 Min: 1 00:29:34.464 Number of Namespaces: 0 00:29:34.464 Compare Command: Not Supported 00:29:34.464 Write Uncorrectable Command: Not Supported 00:29:34.464 Dataset Management Command: Not Supported 00:29:34.464 Write Zeroes Command: Not Supported 00:29:34.464 Set Features Save Field: Not Supported 00:29:34.464 Reservations: Not Supported 00:29:34.464 Timestamp: Not Supported 00:29:34.464 Copy: Not Supported 00:29:34.464 Volatile Write Cache: Not Present 00:29:34.464 Atomic Write Unit (Normal): 1 00:29:34.464 Atomic Write Unit (PFail): 1 00:29:34.464 Atomic Compare & Write Unit: 1 00:29:34.464 Fused Compare & Write: Not Supported 00:29:34.464 Scatter-Gather List 00:29:34.464 SGL Command Set: Supported 00:29:34.464 SGL Keyed: Not Supported 00:29:34.464 SGL Bit Bucket Descriptor: Not Supported 00:29:34.464 SGL Metadata Pointer: Not Supported 00:29:34.464 Oversized SGL: Not Supported 00:29:34.464 SGL Metadata Address: Not Supported 00:29:34.464 SGL Offset: Supported 00:29:34.464 Transport SGL Data Block: Not Supported 00:29:34.464 Replay Protected Memory Block: Not Supported 00:29:34.464 00:29:34.464 Firmware Slot Information 00:29:34.464 ========================= 00:29:34.464 Active slot: 0 00:29:34.464 00:29:34.464 00:29:34.464 Error Log 00:29:34.464 
========= 00:29:34.464 00:29:34.464 Active Namespaces 00:29:34.464 ================= 00:29:34.464 Discovery Log Page 00:29:34.464 ================== 00:29:34.464 Generation Counter: 2 00:29:34.464 Number of Records: 2 00:29:34.464 Record Format: 0 00:29:34.464 00:29:34.464 Discovery Log Entry 0 00:29:34.464 ---------------------- 00:29:34.464 Transport Type: 3 (TCP) 00:29:34.464 Address Family: 1 (IPv4) 00:29:34.464 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:34.464 Entry Flags: 00:29:34.464 Duplicate Returned Information: 0 00:29:34.464 Explicit Persistent Connection Support for Discovery: 0 00:29:34.464 Transport Requirements: 00:29:34.464 Secure Channel: Not Specified 00:29:34.464 Port ID: 1 (0x0001) 00:29:34.464 Controller ID: 65535 (0xffff) 00:29:34.464 Admin Max SQ Size: 32 00:29:34.464 Transport Service Identifier: 4420 00:29:34.464 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:34.464 Transport Address: 10.0.0.1 00:29:34.464 Discovery Log Entry 1 00:29:34.464 ---------------------- 00:29:34.464 Transport Type: 3 (TCP) 00:29:34.464 Address Family: 1 (IPv4) 00:29:34.464 Subsystem Type: 2 (NVM Subsystem) 00:29:34.464 Entry Flags: 00:29:34.464 Duplicate Returned Information: 0 00:29:34.464 Explicit Persistent Connection Support for Discovery: 0 00:29:34.464 Transport Requirements: 00:29:34.464 Secure Channel: Not Specified 00:29:34.464 Port ID: 1 (0x0001) 00:29:34.464 Controller ID: 65535 (0xffff) 00:29:34.464 Admin Max SQ Size: 32 00:29:34.464 Transport Service Identifier: 4420 00:29:34.464 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:29:34.464 Transport Address: 10.0.0.1 00:29:34.464 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:34.464 get_feature(0x01) failed 00:29:34.464 get_feature(0x02) failed 00:29:34.464 get_feature(0x04) failed 00:29:34.464 ===================================================== 00:29:34.464 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:34.464 ===================================================== 00:29:34.464 Controller Capabilities/Features 00:29:34.464 ================================ 00:29:34.464 Vendor ID: 0000 00:29:34.464 Subsystem Vendor ID: 0000 00:29:34.464 Serial Number: 702a96b9c8a54974fab0 00:29:34.464 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:29:34.464 Firmware Version: 6.8.9-20 00:29:34.464 Recommended Arb Burst: 6 00:29:34.464 IEEE OUI Identifier: 00 00 00 00:29:34.464 Multi-path I/O 00:29:34.464 May have multiple subsystem ports: Yes 00:29:34.464 May have multiple controllers: Yes 00:29:34.464 Associated with SR-IOV VF: No 00:29:34.464 Max Data Transfer Size: Unlimited 00:29:34.464 Max Number of Namespaces: 1024 00:29:34.464 Max Number of I/O Queues: 128 00:29:34.464 NVMe Specification Version (VS): 1.3 00:29:34.464 NVMe Specification Version (Identify): 1.3 00:29:34.464 Maximum Queue Entries: 1024 00:29:34.464 Contiguous Queues Required: No 00:29:34.464 Arbitration Mechanisms Supported 00:29:34.464 Weighted Round Robin: Not Supported 00:29:34.464 Vendor Specific: Not Supported 00:29:34.464 Reset Timeout: 7500 ms 00:29:34.464 Doorbell Stride: 4 bytes 00:29:34.464 NVM Subsystem Reset: Not Supported 00:29:34.464 Command Sets Supported 00:29:34.464 NVM Command Set: Supported 00:29:34.464 Boot Partition: Not Supported 00:29:34.464 
Memory Page Size Minimum: 4096 bytes 00:29:34.464 Memory Page Size Maximum: 4096 bytes 00:29:34.464 Persistent Memory Region: Not Supported 00:29:34.464 Optional Asynchronous Events Supported 00:29:34.464 Namespace Attribute Notices: Supported 00:29:34.464 Firmware Activation Notices: Not Supported 00:29:34.464 ANA Change Notices: Supported 00:29:34.464 PLE Aggregate Log Change Notices: Not Supported 00:29:34.464 LBA Status Info Alert Notices: Not Supported 00:29:34.464 EGE Aggregate Log Change Notices: Not Supported 00:29:34.464 Normal NVM Subsystem Shutdown event: Not Supported 00:29:34.464 Zone Descriptor Change Notices: Not Supported 00:29:34.464 Discovery Log Change Notices: Not Supported 00:29:34.464 Controller Attributes 00:29:34.464 128-bit Host Identifier: Supported 00:29:34.464 Non-Operational Permissive Mode: Not Supported 00:29:34.464 NVM Sets: Not Supported 00:29:34.464 Read Recovery Levels: Not Supported 00:29:34.464 Endurance Groups: Not Supported 00:29:34.464 Predictable Latency Mode: Not Supported 00:29:34.464 Traffic Based Keep ALive: Supported 00:29:34.464 Namespace Granularity: Not Supported 00:29:34.464 SQ Associations: Not Supported 00:29:34.464 UUID List: Not Supported 00:29:34.464 Multi-Domain Subsystem: Not Supported 00:29:34.464 Fixed Capacity Management: Not Supported 00:29:34.464 Variable Capacity Management: Not Supported 00:29:34.464 Delete Endurance Group: Not Supported 00:29:34.464 Delete NVM Set: Not Supported 00:29:34.464 Extended LBA Formats Supported: Not Supported 00:29:34.464 Flexible Data Placement Supported: Not Supported 00:29:34.464 00:29:34.464 Controller Memory Buffer Support 00:29:34.464 ================================ 00:29:34.464 Supported: No 00:29:34.464 00:29:34.464 Persistent Memory Region Support 00:29:34.464 ================================ 00:29:34.465 Supported: No 00:29:34.465 00:29:34.465 Admin Command Set Attributes 00:29:34.465 ============================ 00:29:34.465 Security Send/Receive: Not Supported 00:29:34.465 Format NVM: Not Supported 00:29:34.465 Firmware Activate/Download: Not Supported 00:29:34.465 Namespace Management: Not Supported 00:29:34.465 Device Self-Test: Not Supported 00:29:34.465 Directives: Not Supported 00:29:34.465 NVMe-MI: Not Supported 00:29:34.465 Virtualization Management: Not Supported 00:29:34.465 Doorbell Buffer Config: Not Supported 00:29:34.465 Get LBA Status Capability: Not Supported 00:29:34.465 Command & Feature Lockdown Capability: Not Supported 00:29:34.465 Abort Command Limit: 4 00:29:34.465 Async Event Request Limit: 4 00:29:34.465 Number of Firmware Slots: N/A 00:29:34.465 Firmware Slot 1 Read-Only: N/A 00:29:34.465 Firmware Activation Without Reset: N/A 00:29:34.465 Multiple Update Detection Support: N/A 00:29:34.465 Firmware Update Granularity: No Information Provided 00:29:34.465 Per-Namespace SMART Log: Yes 00:29:34.465 Asymmetric Namespace Access Log Page: Supported 00:29:34.465 ANA Transition Time : 10 sec 00:29:34.465 00:29:34.465 Asymmetric Namespace Access Capabilities 00:29:34.465 ANA Optimized State : Supported 00:29:34.465 ANA Non-Optimized State : Supported 00:29:34.465 ANA Inaccessible State : Supported 00:29:34.465 ANA Persistent Loss State : Supported 00:29:34.465 ANA Change State : Supported 00:29:34.465 ANAGRPID is not changed : No 00:29:34.465 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:29:34.465 00:29:34.465 ANA Group Identifier Maximum : 128 00:29:34.465 Number of ANA Group Identifiers : 128 00:29:34.465 Max Number of Allowed Namespaces : 1024 00:29:34.465 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:29:34.465 Command Effects Log Page: Supported 00:29:34.465 Get Log Page Extended Data: Supported 00:29:34.465 Telemetry Log Pages: Not Supported 00:29:34.465 Persistent Event Log Pages: Not Supported 00:29:34.465 Supported Log Pages Log Page: May Support 00:29:34.465 Commands Supported & Effects Log Page: Not Supported 00:29:34.465 Feature Identifiers & Effects Log Page:May Support 00:29:34.465 NVMe-MI Commands & Effects Log Page: May Support 00:29:34.465 Data Area 4 for Telemetry Log: Not Supported 00:29:34.465 Error Log Page Entries Supported: 128 00:29:34.465 Keep Alive: Supported 00:29:34.465 Keep Alive Granularity: 1000 ms 00:29:34.465 00:29:34.465 NVM Command Set Attributes 00:29:34.465 ========================== 00:29:34.465 Submission Queue Entry Size 00:29:34.465 Max: 64 00:29:34.465 Min: 64 00:29:34.465 Completion Queue Entry Size 00:29:34.465 Max: 16 00:29:34.465 Min: 16 00:29:34.465 Number of Namespaces: 1024 00:29:34.465 Compare Command: Not Supported 00:29:34.465 Write Uncorrectable Command: Not Supported 00:29:34.465 Dataset Management Command: Supported 00:29:34.465 Write Zeroes Command: Supported 00:29:34.465 Set Features Save Field: Not Supported 00:29:34.465 Reservations: Not Supported 00:29:34.465 Timestamp: Not Supported 00:29:34.465 Copy: Not Supported 00:29:34.465 Volatile Write Cache: Present 00:29:34.465 Atomic Write Unit (Normal): 1 00:29:34.465 Atomic Write Unit (PFail): 1 00:29:34.465 Atomic Compare & Write Unit: 1 00:29:34.465 Fused Compare & Write: Not Supported 00:29:34.465 Scatter-Gather List 00:29:34.465 SGL Command Set: Supported 00:29:34.465 SGL Keyed: Not Supported 00:29:34.465 SGL Bit Bucket Descriptor: Not Supported 00:29:34.465 SGL Metadata Pointer: Not Supported 00:29:34.465 Oversized SGL: Not Supported 00:29:34.465 SGL Metadata Address: Not Supported 00:29:34.465 SGL Offset: Supported 00:29:34.465 Transport SGL Data Block: Not Supported 00:29:34.465 Replay Protected Memory Block: Not Supported 00:29:34.465 00:29:34.465 Firmware Slot Information 00:29:34.465 ========================= 00:29:34.465 Active slot: 0 00:29:34.465 00:29:34.465 Asymmetric Namespace Access 00:29:34.465 =========================== 00:29:34.465 Change Count : 0 00:29:34.465 Number of ANA Group Descriptors : 1 00:29:34.465 ANA Group Descriptor : 0 00:29:34.465 ANA Group ID : 1 00:29:34.465 Number of NSID Values : 1 00:29:34.465 Change Count : 0 00:29:34.465 ANA State : 1 00:29:34.465 Namespace Identifier : 1 00:29:34.465 00:29:34.465 Commands Supported and Effects 00:29:34.465 ============================== 00:29:34.465 Admin Commands 00:29:34.465 -------------- 00:29:34.465 Get Log Page (02h): Supported 00:29:34.465 Identify (06h): Supported 00:29:34.465 Abort (08h): Supported 00:29:34.465 Set Features (09h): Supported 00:29:34.465 Get Features (0Ah): Supported 00:29:34.465 Asynchronous Event Request (0Ch): Supported 00:29:34.465 Keep Alive (18h): Supported 00:29:34.465 I/O Commands 00:29:34.465 ------------ 00:29:34.465 Flush (00h): Supported 00:29:34.465 Write (01h): Supported LBA-Change 00:29:34.465 Read (02h): Supported 00:29:34.465 Write Zeroes (08h): Supported LBA-Change 00:29:34.465 Dataset Management (09h): Supported 00:29:34.465 00:29:34.465 Error Log 00:29:34.465 ========= 00:29:34.465 Entry: 0 00:29:34.465 Error Count: 0x3 00:29:34.465 Submission Queue Id: 0x0 00:29:34.465 Command Id: 0x5 00:29:34.465 Phase Bit: 0 00:29:34.465 Status Code: 0x2 00:29:34.465 Status Code Type: 0x0 00:29:34.465 Do Not Retry: 1 00:29:34.465 
Error Location: 0x28 00:29:34.465 LBA: 0x0 00:29:34.465 Namespace: 0x0 00:29:34.465 Vendor Log Page: 0x0 00:29:34.465 ----------- 00:29:34.465 Entry: 1 00:29:34.465 Error Count: 0x2 00:29:34.465 Submission Queue Id: 0x0 00:29:34.465 Command Id: 0x5 00:29:34.465 Phase Bit: 0 00:29:34.465 Status Code: 0x2 00:29:34.465 Status Code Type: 0x0 00:29:34.465 Do Not Retry: 1 00:29:34.465 Error Location: 0x28 00:29:34.465 LBA: 0x0 00:29:34.465 Namespace: 0x0 00:29:34.465 Vendor Log Page: 0x0 00:29:34.465 ----------- 00:29:34.465 Entry: 2 00:29:34.465 Error Count: 0x1 00:29:34.465 Submission Queue Id: 0x0 00:29:34.465 Command Id: 0x4 00:29:34.465 Phase Bit: 0 00:29:34.465 Status Code: 0x2 00:29:34.465 Status Code Type: 0x0 00:29:34.465 Do Not Retry: 1 00:29:34.465 Error Location: 0x28 00:29:34.465 LBA: 0x0 00:29:34.465 Namespace: 0x0 00:29:34.465 Vendor Log Page: 0x0 00:29:34.465 00:29:34.465 Number of Queues 00:29:34.465 ================ 00:29:34.465 Number of I/O Submission Queues: 128 00:29:34.465 Number of I/O Completion Queues: 128 00:29:34.465 00:29:34.465 ZNS Specific Controller Data 00:29:34.465 ============================ 00:29:34.465 Zone Append Size Limit: 0 00:29:34.465 00:29:34.465 00:29:34.465 Active Namespaces 00:29:34.465 ================= 00:29:34.465 get_feature(0x05) failed 00:29:34.465 Namespace ID:1 00:29:34.465 Command Set Identifier: NVM (00h) 00:29:34.465 Deallocate: Supported 00:29:34.465 Deallocated/Unwritten Error: Not Supported 00:29:34.465 Deallocated Read Value: Unknown 00:29:34.465 Deallocate in Write Zeroes: Not Supported 00:29:34.465 Deallocated Guard Field: 0xFFFF 00:29:34.465 Flush: Supported 00:29:34.465 Reservation: Not Supported 00:29:34.465 Namespace Sharing Capabilities: Multiple Controllers 00:29:34.465 Size (in LBAs): 1953525168 (931GiB) 00:29:34.465 Capacity (in LBAs): 1953525168 (931GiB) 00:29:34.465 Utilization (in LBAs): 1953525168 (931GiB) 00:29:34.465 UUID: da0aab1b-32e3-482f-b4f0-112084d6bd17 00:29:34.465 Thin Provisioning: Not Supported 00:29:34.465 Per-NS Atomic Units: Yes 00:29:34.465 Atomic Boundary Size (Normal): 0 00:29:34.465 Atomic Boundary Size (PFail): 0 00:29:34.465 Atomic Boundary Offset: 0 00:29:34.465 NGUID/EUI64 Never Reused: No 00:29:34.465 ANA group ID: 1 00:29:34.465 Namespace Write Protected: No 00:29:34.465 Number of LBA Formats: 1 00:29:34.465 Current LBA Format: LBA Format #00 00:29:34.465 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:34.465 00:29:34.465 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:29:34.465 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:34.465 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:29:34.465 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:34.465 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:29:34.465 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.465 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:34.465 rmmod nvme_tcp 00:29:34.465 rmmod nvme_fabrics 00:29:34.465 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.465 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:29:34.466 18:25:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.466 18:25:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:37.003 18:25:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:37.989 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:37.989 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:37.989 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:37.989 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:37.989 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:37.989 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:29:37.990 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:37.990 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:37.990 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:29:37.990 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:29:37.990 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:29:37.990 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:29:37.990 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:29:37.990 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:29:37.990 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:29:37.990 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:29:38.927 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:29:39.186 00:29:39.186 real 0m9.761s 00:29:39.186 user 0m2.142s 00:29:39.186 sys 0m3.603s 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:29:39.186 ************************************ 00:29:39.186 END TEST nvmf_identify_kernel_target 00:29:39.186 ************************************ 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.186 ************************************ 00:29:39.186 START TEST nvmf_auth_host 00:29:39.186 ************************************ 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:29:39.186 * Looking for test storage... 
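Editor's note: the identify_kernel_target run that just ended drives the Linux kernel nvmet target entirely through configfs, as traced above: create the subsystem, back namespace 1 with the local /dev/nvme0n1, expose a TCP port on 10.0.0.1:4420, link subsystem to port, check the result with nvme discover and two spdk_nvme_identify calls (against the discovery subsystem and against nqn.2016-06.io.spdk:testnqn), then unwind everything in reverse order. Condensed into a standalone sketch under the usual assumptions (root, a kernel providing nvmet/nvmet-tcp, /dev/nvme0n1 free; the xtrace output hides the redirection targets, so the attribute file names below are the standard nvmet configfs ones, inferred rather than copied from the trace):

  NQN=nqn.2016-06.io.spdk:testnqn
  CFG=/sys/kernel/config/nvmet
  modprobe nvmet
  modprobe nvmet-tcp
  # Subsystem with one namespace backed by the local NVMe drive
  mkdir -p "$CFG/subsystems/$NQN/namespaces/1"
  echo 1            > "$CFG/subsystems/$NQN/attr_allow_any_host"
  echo /dev/nvme0n1 > "$CFG/subsystems/$NQN/namespaces/1/device_path"
  echo 1            > "$CFG/subsystems/$NQN/namespaces/1/enable"
  # TCP listener on 10.0.0.1:4420
  mkdir "$CFG/ports/1"
  echo 10.0.0.1 > "$CFG/ports/1/addr_traddr"
  echo tcp      > "$CFG/ports/1/addr_trtype"
  echo 4420     > "$CFG/ports/1/addr_trsvcid"
  echo ipv4     > "$CFG/ports/1/addr_adrfam"
  ln -s "$CFG/subsystems/$NQN" "$CFG/ports/1/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list the discovery subsystem plus $NQN
  # Teardown mirrors clean_kernel_target: disable, unlink, then rmdir in reverse order
  echo 0 > "$CFG/subsystems/$NQN/namespaces/1/enable"
  rm -f "$CFG/ports/1/subsystems/$NQN"
  rmdir "$CFG/subsystems/$NQN/namespaces/1" "$CFG/ports/1" "$CFG/subsystems/$NQN"
  modprobe -r nvmet_tcp nvmet

The long identify dumps above are the host-side view of exactly this target: the discovery controller reports two log entries (discovery plus testnqn), and the testnqn controller reports a single 931 GiB namespace in ANA group 1, matching the local drive that was attached.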
00:29:39.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.186 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:39.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.445 --rc genhtml_branch_coverage=1 00:29:39.445 --rc genhtml_function_coverage=1 00:29:39.445 --rc genhtml_legend=1 00:29:39.445 --rc geninfo_all_blocks=1 00:29:39.445 --rc geninfo_unexecuted_blocks=1 00:29:39.445 00:29:39.445 ' 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:39.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.445 --rc genhtml_branch_coverage=1 00:29:39.445 --rc genhtml_function_coverage=1 00:29:39.445 --rc genhtml_legend=1 00:29:39.445 --rc geninfo_all_blocks=1 00:29:39.445 --rc geninfo_unexecuted_blocks=1 00:29:39.445 00:29:39.445 ' 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:39.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.445 --rc genhtml_branch_coverage=1 00:29:39.445 --rc genhtml_function_coverage=1 00:29:39.445 --rc genhtml_legend=1 00:29:39.445 --rc geninfo_all_blocks=1 00:29:39.445 --rc geninfo_unexecuted_blocks=1 00:29:39.445 00:29:39.445 ' 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:39.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.445 --rc genhtml_branch_coverage=1 00:29:39.445 --rc genhtml_function_coverage=1 00:29:39.445 --rc genhtml_legend=1 00:29:39.445 --rc geninfo_all_blocks=1 00:29:39.445 --rc geninfo_unexecuted_blocks=1 00:29:39.445 00:29:39.445 ' 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.445 18:25:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:39.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.445 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.446 18:25:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.345 18:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:41.345 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:41.345 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.345 
18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:41.345 Found net devices under 0000:09:00.0: cvl_0_0 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.345 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:41.346 Found net devices under 0000:09:00.1: cvl_0_1 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.346 18:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.346 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:29:41.604 00:29:41.604 --- 10.0.0.2 ping statistics --- 00:29:41.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.604 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:29:41.604 00:29:41.604 --- 10.0.0.1 ping statistics --- 00:29:41.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.604 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=704062 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 704062 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 704062 ']' 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
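The namespace fabric assembled above (nvmf/common.sh@250-291) pairs the two E810 ports on a single machine: cvl_0_1 keeps 10.0.0.1 in the root namespace (NVMF_INITIATOR_IP) while cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2 (NVMF_FIRST_TARGET_IP), an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction proves reachability before any NVMe traffic flows. Condensed into plain commands, the same setup is:

# Condensed from the ip/iptables commands traced above; interface names and addresses as in the log.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
ping -c 1 10.0.0.2                                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns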
00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.604 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=934beb22666ab9dd77d970530cb9bb42 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.r8c 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 934beb22666ab9dd77d970530cb9bb42 0 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 934beb22666ab9dd77d970530cb9bb42 0 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=934beb22666ab9dd77d970530cb9bb42 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.r8c 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.r8c 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.r8c 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:41.863 18:25:29 
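nvmfappstart then launches the SPDK nvmf_tgt application inside that namespace with DH-HMAC-CHAP logging enabled (nvmf/common.sh@508, pid 704062 above) and blocks in waitforlisten until the RPC socket answers. The waitforlisten loop itself lives in autotest_common.sh and is not expanded in this excerpt; a plausible equivalent, assuming the default /var/tmp/spdk.sock shown in the log, is:

# Start nvmf_tgt in the target namespace, then poll its RPC socket (sketch only; not the
# suite's waitforlisten implementation).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done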
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=073f0391dd2a654ff12a5d16ae6bea9a1791b35fa87e5705d005374d330ad830 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ceH 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 073f0391dd2a654ff12a5d16ae6bea9a1791b35fa87e5705d005374d330ad830 3 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 073f0391dd2a654ff12a5d16ae6bea9a1791b35fa87e5705d005374d330ad830 3 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=073f0391dd2a654ff12a5d16ae6bea9a1791b35fa87e5705d005374d330ad830 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ceH 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ceH 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ceH 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=04ef74b11a1c2549e7a0b226d71d3f2bd5d241894f52ee9c 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.r2j 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 04ef74b11a1c2549e7a0b226d71d3f2bd5d241894f52ee9c 0 00:29:41.863 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 04ef74b11a1c2549e7a0b226d71d3f2bd5d241894f52ee9c 0 
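Each gen_dhchap_key call above draws len/2 random bytes with xxd (so the key is len hex characters), creates a mode-0600 /tmp/spdk.key-<digest>.XXX file, and formats the secret through an inline "python -" step whose body xtrace does not capture. Judging from the DHHC-1 strings that appear later in the log, where the base64 payload decodes to the ASCII hex key plus four extra bytes, the formatting matches the DH-HMAC-CHAP secret representation of key followed by its CRC-32. The sketch below reconstructs that and should be read as an approximation, not the suite's exact code:

# Approximate reconstruction of gen_dhchap_key/format_dhchap_key for one "null 32" key.
digest=null len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # len hex characters of randomness
file=$(mktemp -t "spdk.key-$digest.XXX")
python3 - "$key" 0 > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])                            # 0=null 1=sha256 2=sha384 3=sha512
crc = zlib.crc32(key).to_bytes(4, "little")          # assumed little-endian CRC-32 tail
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:", end="")
PY
chmod 0600 "$file"
echo "$file"                                         # e.g. /tmp/spdk.key-null.r8c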
00:29:41.864 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:41.864 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:41.864 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=04ef74b11a1c2549e7a0b226d71d3f2bd5d241894f52ee9c 00:29:41.864 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:41.864 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.r2j 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.r2j 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.r2j 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=37c747266608fa6789200089e7ff0b5d45f5935ffecf6f2e 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.g7W 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 37c747266608fa6789200089e7ff0b5d45f5935ffecf6f2e 2 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 37c747266608fa6789200089e7ff0b5d45f5935ffecf6f2e 2 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=37c747266608fa6789200089e7ff0b5d45f5935ffecf6f2e 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.g7W 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.g7W 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.g7W 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:42.123 18:25:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=582a82dbbdddf47cce9e117506789df0 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zYv 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 582a82dbbdddf47cce9e117506789df0 1 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 582a82dbbdddf47cce9e117506789df0 1 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=582a82dbbdddf47cce9e117506789df0 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:42.123 18:25:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zYv 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zYv 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.zYv 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3470154dda9369d440d5154c3fa05829 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.FNb 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3470154dda9369d440d5154c3fa05829 1 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3470154dda9369d440d5154c3fa05829 1 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:42.123 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=3470154dda9369d440d5154c3fa05829 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.FNb 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.FNb 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.FNb 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9996113cf18c509db50ab236d0b10edf77fb81f5b843de65 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vZc 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9996113cf18c509db50ab236d0b10edf77fb81f5b843de65 2 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9996113cf18c509db50ab236d0b10edf77fb81f5b843de65 2 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9996113cf18c509db50ab236d0b10edf77fb81f5b843de65 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vZc 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vZc 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.vZc 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:29:42.124 18:25:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=658c4705a3dd67f1eb54a3b0e6ab5ea5 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Cey 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 658c4705a3dd67f1eb54a3b0e6ab5ea5 0 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 658c4705a3dd67f1eb54a3b0e6ab5ea5 0 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=658c4705a3dd67f1eb54a3b0e6ab5ea5 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:29:42.124 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Cey 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Cey 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Cey 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89a9af53818580722cb0b78363612c04dcc54e955646c4f77eceaebf68a3c69f 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.wtq 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89a9af53818580722cb0b78363612c04dcc54e955646c4f77eceaebf68a3c69f 3 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 89a9af53818580722cb0b78363612c04dcc54e955646c4f77eceaebf68a3c69f 3 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:29:42.382 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89a9af53818580722cb0b78363612c04dcc54e955646c4f77eceaebf68a3c69f 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.wtq 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.wtq 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.wtq 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 704062 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 704062 ']' 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.383 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.r8c 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ceH ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ceH 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.r2j 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.g7W ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.g7W 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.zYv 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.FNb ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FNb 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.vZc 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Cey ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Cey 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.wtq 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:42.641 18:25:30 
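With nvmf_tgt running, host/auth.sh@80-82 registers every generated secret in the SPDK application's keyring so the controller can later reference them by name: key0..key4 for the host secrets and ckey0..ckey3 for the controller counterparts (ckeys[4] is intentionally empty). The loop reduces to repeated keyring_file_add_key RPCs; rpc below stands in for the suite's rpc_cmd wrapper and talks to the same /var/tmp/spdk.sock:

# Condensed from the keyring_file_add_key calls traced above.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
for i in "${!keys[@]}"; do
    rpc keyring_file_add_key "key$i" "${keys[i]}"                               # host secret
    [[ -n ${ckeys[i]:-} ]] && rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"   # controller secret, if set
done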
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:42.641 18:25:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:43.575 Waiting for block devices as requested 00:29:43.833 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:43.833 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:43.833 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:44.090 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:44.090 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:44.090 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:44.091 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:44.348 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:44.348 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:29:44.348 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:44.607 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:44.607 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:44.607 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:44.865 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:44.865 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:44.865 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:44.865 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:45.432 No valid GPT data, bailing 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:45.432 18:25:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:29:45.432 00:29:45.432 Discovery Log Number of Records 2, Generation counter 2 00:29:45.432 =====Discovery Log Entry 0====== 00:29:45.432 trtype: tcp 00:29:45.432 adrfam: ipv4 00:29:45.432 subtype: current discovery subsystem 00:29:45.432 treq: not specified, sq flow control disable supported 00:29:45.432 portid: 1 00:29:45.432 trsvcid: 4420 00:29:45.432 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:45.432 traddr: 10.0.0.1 00:29:45.432 eflags: none 00:29:45.432 sectype: none 00:29:45.432 =====Discovery Log Entry 1====== 00:29:45.432 trtype: tcp 00:29:45.432 adrfam: ipv4 00:29:45.432 subtype: nvme subsystem 00:29:45.432 treq: not specified, sq flow control disable supported 00:29:45.432 portid: 1 00:29:45.432 trsvcid: 4420 00:29:45.432 subnqn: nqn.2024-02.io.spdk:cnode0 00:29:45.432 traddr: 10.0.0.1 00:29:45.432 eflags: none 00:29:45.432 sectype: none 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:29:45.432 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.691 nvme0n1 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.691 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
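On the kernel side (nvmf/common.sh@660-708 and host/auth.sh@35-51 above), the suite builds an nvmet subsystem backed by /dev/nvme0n1, exposes it on 10.0.0.1:4420 over TCP, restricts it to nqn.2024-02.io.spdk:host0, and installs the DH-HMAC-CHAP parameters for that host. xtrace shows the echo commands but not their redirection targets, so the configfs attribute names below are the standard nvmet ones and should be treated as assumed rather than read off the log; $key1 and $ckey1 stand for the DHHC-1 strings echoed at host/auth.sh@50-51.

# Sketch of the kernel-target provisioning; attribute paths under /sys/kernel/config/nvmet are assumed.
cfs=/sys/kernel/config/nvmet
subsys=$cfs/subsystems/nqn.2024-02.io.spdk:cnode0
host=$cfs/hosts/nqn.2024-02.io.spdk:host0
modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$cfs/ports/1" "$host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
echo tcp          > "$cfs/ports/1/addr_trtype"
echo 4420         > "$cfs/ports/1/addr_trsvcid"
echo ipv4         > "$cfs/ports/1/addr_adrfam"
ln -s "$subsys" "$cfs/ports/1/subsystems/"
echo 0 > "$subsys/attr_allow_any_host"                 # require explicit host authorization
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"              # digest under test
echo ffdhe2048      > "$host/dhchap_dhgroup"           # DH group under test
echo "$key1"        > "$host/dhchap_key"               # DHHC-1:00:... host secret
echo "$ckey1"       > "$host/dhchap_ctrl_key"          # DHHC-1:02:... controller secret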
00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.692 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.951 nvme0n1 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:45.951 18:25:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.951 18:25:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.210 nvme0n1 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.210 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.469 nvme0n1 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.469 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.727 nvme0n1 00:29:46.727 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.727 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.727 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.727 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.727 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 
00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.728 nvme0n1 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.728 18:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.728 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:46.986 
18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.986 nvme0n1 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:46.986 18:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.244 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:47.245 18:25:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.245 nvme0n1 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.245 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:47.503 18:25:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.503 nvme0n1 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.503 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:47.762 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:47.763 18:25:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.763 nvme0n1 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:47.763 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:48.021 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:48.022 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:48.022 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:48.022 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.022 18:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.022 nvme0n1 00:29:48.022 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.022 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.022 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.022 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.022 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.022 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.279 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.279 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.279 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.280 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.538 nvme0n1 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:48.538 18:25:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.538 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.797 nvme0n1 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.797 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.798 18:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.057 nvme0n1 00:29:49.057 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.057 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.057 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.057 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.057 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.057 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
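Each iteration first reprograms the target side via nvmet_auth_set_key (the echoed 'hmac(sha256)', DH-group name, and DHHC-1 secrets in the trace) and then calls connect_authenticate, sweeping every key index for each allowed FFDHE group. The outer loop, as it appears from the host/auth.sh@101-104 trace lines (a sketch under those assumptions, not the exact script; keys/ckeys hold the DHHC-1 secrets echoed in the log):

    # DH groups exercised in this portion of the run.
    dhgroups=("ffdhe4096" "ffdhe6144" "ffdhe8192")

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Push the key pair to the target, then prove the host can authenticate with it.
            nvmet_auth_set_key "sha256" "$dhgroup" "$keyid"
            connect_authenticate "sha256" "$dhgroup" "$keyid"
        done
    done
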
00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:49.315 18:25:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.315 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.573 nvme0n1 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.573 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.832 nvme0n1 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.832 18:25:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.399 nvme0n1 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.399 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.400 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:50.400 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.400 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.966 nvme0n1 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:50.966 18:25:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:50.966 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.967 18:25:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.532 nvme0n1 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:51.532 
18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:51.532 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:51.533 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.098 nvme0n1 00:29:52.098 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.098 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.098 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.098 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.098 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.099 18:25:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.665 nvme0n1 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:52.665 18:25:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:29:52.665 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.666 18:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.600 nvme0n1 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:29:53.600 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.601 18:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.535 nvme0n1 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:54.535 18:25:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:54.535 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:54.536 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:54.536 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:54.536 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:54.536 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:54.536 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.536 18:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.468 nvme0n1 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:55.468 18:25:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.468 18:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.399 nvme0n1 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:56.400 18:25:44 
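The nvmet_auth_set_key calls above only show bare echo commands because xtrace does not print redirections; the echoed values make clear what is being configured on the target side. A sketch of the equivalent configfs writes, assuming a kernel nvmet target with the upstream auth attributes (dhchap_hash, dhchap_dhgroup, dhchap_key and dhchap_ctrl_key are assumptions, not taken from this log):

  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # digest used for DH-HMAC-CHAP
  echo 'ffdhe8192'    > "$host_dir/dhchap_dhgroup"  # FFDHE group for the key exchange
  echo 'DHHC-1:...'   > "$host_dir/dhchap_key"      # the full host secret echoed above
  # Written only when a controller key exists (bidirectional auth); skipped here
  # for key slot 4, whose ckey is empty in the trace.
  # echo 'DHHC-1:...' > "$host_dir/dhchap_ctrl_key"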
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.400 18:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.331 nvme0n1 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:29:57.331 
18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.331 nvme0n1 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.331 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.598 nvme0n1 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.598 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.855 nvme0n1 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:57.855 18:25:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.855 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.112 nvme0n1 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.112 18:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:58.112 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.368 nvme0n1 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:58.368 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:58.369 18:25:46 
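The auth.sh@100-104 frames that keep reappearing in these records outline the harness: every digest is paired with every DH group and every configured key slot. A reconstruction of that loop (shape only, not the verbatim script):

  for digest in "${digests[@]}"; do              # sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do        # e.g. ffdhe2048, ffdhe8192
          for keyid in "${!keys[@]}"; do         # key slots 0-4 in this run
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (configfs)
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (SPDK RPCs)
          done
      done
  done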
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.369 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.625 nvme0n1 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.625 18:25:46 
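get_main_ns_ip (the nvmf/common.sh@769-783 frames above) picks which address the host should dial based on the transport under test. A reconstruction inferred from the expanded values in the trace (variable names are approximations, not the exact source):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs dial the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs dial the initiator IP

      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}         # name of the variable to dereference
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"                                # resolves to 10.0.0.1 in this run
  }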
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.625 18:25:46 
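The secrets exchanged throughout this run use the DH-HMAC-CHAP configured-key representation, DHHC-1:<hh>:<base64>:, where <hh> identifies the hash associated with the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick length check against one key from this trace (the format description comes from the NVMe-oF DH-HMAC-CHAP definition, not from this log; the checksum itself is not verified here):

  key='DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt:'
  hash_id=$(cut -d: -f2 <<< "$key")
  bytes=$(cut -d: -f3 <<< "$key" | base64 -d | wc -c)
  echo "transform $hash_id: $((bytes - 4))-byte secret + 4-byte CRC-32"   # 32-byte secret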
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.625 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.881 nvme0n1 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.881 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.139 nvme0n1 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.139 18:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.139 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.397 nvme0n1 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:59.397 
18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:29:59.397 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.398 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.664 nvme0n1 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.664 
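The stretch of trace above is one connect_authenticate pass for sha384/ffdhe3072: the target-side key is installed, bdev_nvme_set_options pins the digest and DH group, and the controller is attached with the matching DH-HMAC-CHAP key. Below is a minimal host-side sketch reconstructed from the rpc_cmd calls visible in the trace; it assumes rpc_cmd resolves to SPDK's scripts/rpc.py (as the test harness wrapper does) and that key3/ckey3 were registered earlier in the script, which this excerpt does not show.

  # One iteration as captured above: digest sha384, dhgroup ffdhe3072, keyid 3 (bidirectional).
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # For keyid 4 no ckey is registered, so the --dhchap-ctrlr-key flag is simply omitted,
  # exactly as in the attach call logged just above.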
18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:29:59.664 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.665 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.923 nvme0n1 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:59.923 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:59.924 18:25:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.924 18:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.182 nvme0n1 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.182 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:00.183 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:00.183 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.183 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.440 nvme0n1 00:30:00.440 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.440 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.440 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.440 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.440 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.698 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.957 nvme0n1 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:00.957 18:25:48 
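Every attach in this section is followed by the same check-and-teardown sequence before the next key is tried: list the controllers, confirm the expected nvme0 came up, then detach it. A short sketch of that step, again assuming rpc_cmd maps onto scripts/rpc.py:

  # Verify the authenticated controller is present, then remove it for the next iteration.
  name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]      # logged above as [[ nvme0 == \n\v\m\e\0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0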
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.957 18:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.216 nvme0n1 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.216 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.813 nvme0n1 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:01.813 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:01.814 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:01.814 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:01.814 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:01.814 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.814 18:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.406 nvme0n1 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.406 18:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.406 18:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.406 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.972 nvme0n1 00:30:02.972 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.972 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:02.972 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.972 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:02.973 18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.973 
18:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.539 nvme0n1 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:03.539 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.540 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.106 nvme0n1 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:04.106 18:25:51 
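The host/auth.sh@101 and @102 markers show the loops driving this whole stretch of the log: each FFDHE group is exercised against every key ID with the sha384 digest, and a controller key is passed only when a ckey exists for that ID. A rough sketch of that structure, assuming the keys and ckeys arrays were populated earlier in the script and that nvmet_auth_set_key and connect_authenticate are the script's own helpers seen at @103 and @104:

  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # the groups that appear in this section
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          # Target side: install key/ckey for hmac(sha384) and the current DH group.
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
          # Host side: set options, attach with --dhchap-key key$keyid (plus ckey if present),
          # verify the controller, then detach it.
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done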
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.106 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.107 18:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.040 nvme0n1 00:30:05.040 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.040 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.041 18:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.975 nvme0n1 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
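On the initiator side, connect_authenticate repeats the same RPC sequence for every combination traced here: constrain the SPDK host's DH-HMAC-CHAP parameters, attach with the matching keyring names, check that the controller actually came up, then detach. Condensed from the surrounding trace; rpc_cmd is assumed to wrap scripts/rpc.py against the running target, and key1/ckey1 are keyring names assumed to have been registered earlier in the test:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # "nvme0" only if the handshake succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next combination

Note that keyid 4 carries no controller key in this run (ckey is empty, so the [[ -z '' ]] branch drops --dhchap-ctrlr-key); those passes exercise unidirectional authentication.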
xtrace_disable 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:05.975 
18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.975 18:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.909 nvme0n1 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.909 18:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.843 nvme0n1 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:07.843 18:25:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:07.843 18:25:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:07.843 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:07.844 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:07.844 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:07.844 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:07.844 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:07.844 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:07.844 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.844 18:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.410 nvme0n1 00:30:08.410 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.410 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.410 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.410 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.410 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
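The get_main_ns_ip frames that precede every attach decide which address the initiator dials: the tcp transport maps to NVMF_INITIATOR_IP, rdma would map to NVMF_FIRST_TARGET_IP, and the [[ -z ]] checks fall through until the resolved value, 10.0.0.1 in this run, is echoed. A condensed reading of that helper, with the transport variable name assumed for illustration (the exact control flow in nvmf/common.sh may differ):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the env var to read; "tcp" selects NVMF_INITIATOR_IP
        echo "${!ip}"                          # indirect expansion -> 10.0.0.1 here
    }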
ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:08.668 nvme0n1 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.668 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:08.927 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.928 nvme0n1 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:30:08.928 
18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.928 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.186 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.187 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.187 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.187 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:09.187 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.187 18:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.187 nvme0n1 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.187 
18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.187 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.446 nvme0n1 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.446 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.705 nvme0n1 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.705 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.964 nvme0n1 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.964 
18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:09.964 18:25:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.964 18:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.222 nvme0n1 00:30:10.222 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.222 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.222 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.222 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.222 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.222 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.222 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.222 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.222 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:10.223 18:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.223 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.481 nvme0n1 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.481 18:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.481 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.482 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.482 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.482 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.482 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.482 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.482 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.482 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:10.482 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.482 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.740 nvme0n1 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:30:10.740 
18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:10.740 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:10.741 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:10.741 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:10.741 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.741 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
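Each pass above repeats the same DH-HMAC-CHAP check for one digest/dhgroup/keyid combination: host/auth.sh installs the key (and the controller key, when one exists) on the target, restricts the host to the single sha512 + dhgroup pair under test, attaches over TCP at 10.0.0.1:4420, confirms the controller registered as nvme0, and detaches before the next iteration. Below is a condensed sketch of that loop reconstructed from the xtrace; the helpers (nvmet_auth_set_key, rpc_cmd, get_main_ns_ip) and the key/ckey arrays are assumptions taken from the trace, not code copied out of host/auth.sh.

  # Sketch reconstructed from the trace; rpc_cmd is assumed to wrap SPDK's scripts/rpc.py.
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Target side: provision the key (and optional ctrlr key) for this digest + DH group.
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
      # Host side: only allow the digest/dhgroup pair being tested.
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      # Controller key is optional; in this run keyid 4 has no ckey, so the flag is omitted.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
      # Authentication passed if the controller shows up under the expected name.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

The remaining iterations in this log are the same sequence replayed for ffdhe4096 and ffdhe6144 with key ids 0 through 4.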
00:30:10.999 nvme0n1 00:30:10.999 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:11.000 18:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.000 18:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 nvme0n1 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.259 18:25:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.259 18:25:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.259 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.517 nvme0n1 00:30:11.517 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.517 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:11.517 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.517 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.517 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:11.517 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.775 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.034 nvme0n1 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.034 18:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.293 nvme0n1 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.293 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.552 nvme0n1 00:30:12.552 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.552 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:12.552 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.552 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:12.552 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.552 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:12.810 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.811 18:26:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.811 18:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.376 nvme0n1 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:13.376 18:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.376 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.634 nvme0n1 00:30:13.634 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.634 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:13.891 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.892 18:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.459 nvme0n1 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.459 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.025 nvme0n1 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.025 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:15.026 18:26:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.026 18:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.592 nvme0n1 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTM0YmViMjI2NjZhYjlkZDc3ZDk3MDUzMGNiOWJiNDLqs3B1: 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: ]] 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDczZjAzOTFkZDJhNjU0ZmYxMmE1ZDE2YWU2YmVhOWExNzkxYjM1ZmE4N2U1NzA1ZDAwNTM3NGQzMzBhZDgzMMPD4pc=: 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:15.592 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.593 18:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.527 nvme0n1 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.527 18:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.461 nvme0n1 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.461 18:26:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:17.461 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.462 18:26:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.462 18:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.396 nvme0n1 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTk5NjExM2NmMThjNTA5ZGI1MGFiMjM2ZDBiMTBlZGY3N2ZiODFmNWI4NDNkZTY18wFQWQ==: 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: ]] 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjU4YzQ3MDVhM2RkNjdmMWViNTRhM2IwZTZhYjVlYTX/PCDy: 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:18.396 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:30:18.397 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.397 
18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.962 nvme0n1 00:30:18.962 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.220 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:19.220 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.220 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.220 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:19.220 18:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODlhOWFmNTM4MTg1ODA3MjJjYjBiNzgzNjM2MTJjMDRkY2M1NGU5NTU2NDZjNGY3N2VjZWFlYmY2OGEzYzY5ZilFwOs=: 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:30:19.220 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.221 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.155 nvme0n1 00:30:20.155 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.155 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.155 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.155 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.155 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:30:20.155 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.156 request: 00:30:20.156 { 00:30:20.156 "name": "nvme0", 00:30:20.156 "trtype": "tcp", 00:30:20.156 "traddr": "10.0.0.1", 00:30:20.156 "adrfam": "ipv4", 00:30:20.156 "trsvcid": "4420", 00:30:20.156 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:20.156 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:20.156 "prchk_reftag": false, 00:30:20.156 "prchk_guard": false, 00:30:20.156 "hdgst": false, 00:30:20.156 "ddgst": false, 00:30:20.156 "allow_unrecognized_csi": false, 00:30:20.156 "method": "bdev_nvme_attach_controller", 00:30:20.156 "req_id": 1 00:30:20.156 } 00:30:20.156 Got JSON-RPC error response 00:30:20.156 response: 00:30:20.156 { 00:30:20.156 "code": -5, 00:30:20.156 "message": "Input/output error" 00:30:20.156 } 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.156 18:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
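The entries above are the first negative-path check in this block: the target side has DHCHAP keys configured, so bdev_nvme_attach_controller issued with no --dhchap-key is expected to fail, the JSON-RPC response comes back with code -5 (Input/output error), the NOT wrapper turns that failure into es=1, and bdev_nvme_get_controllers piped through jq length confirms no controller object was left behind. Condensed into plain shell, and assuming rpc_cmd wraps scripts/rpc.py exactly as it does in this trace, the check is roughly:

  # Attach without DHCHAP credentials against a target that requires them; failure is
  # the expected outcome, so a successful attach is treated as a test error.
  if rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
      echo "unexpected: attach without DHCHAP keys succeeded" >&2
      exit 1
  fi
  # The rejected attach must not leave a stale controller behind.
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq length)" -eq 0 ]]

The same pattern repeats below with deliberately wrong credentials.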
00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.156 request: 00:30:20.156 { 00:30:20.156 "name": "nvme0", 00:30:20.156 "trtype": "tcp", 00:30:20.156 "traddr": "10.0.0.1", 00:30:20.156 "adrfam": "ipv4", 00:30:20.156 "trsvcid": "4420", 00:30:20.156 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:20.156 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:20.156 "prchk_reftag": false, 00:30:20.156 "prchk_guard": false, 00:30:20.156 "hdgst": false, 00:30:20.156 "ddgst": false, 00:30:20.156 "dhchap_key": "key2", 00:30:20.156 "allow_unrecognized_csi": false, 00:30:20.156 "method": "bdev_nvme_attach_controller", 00:30:20.156 "req_id": 1 00:30:20.156 } 00:30:20.156 Got JSON-RPC error response 00:30:20.156 response: 00:30:20.156 { 00:30:20.156 "code": -5, 00:30:20.156 "message": "Input/output error" 00:30:20.156 } 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.156 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
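The attempt above passed --dhchap-key key2 against a target provisioned with key 1, and the entries that follow repeat the check for bidirectional authentication: the host key (key1) matches, but the controller key supplied as ckey2 does not, so the attach must fail with the same -5 Input/output error and again leave zero controllers. A minimal sketch of that mismatched controller-key case, using the same fixed address and NQNs as the trace:

  # Host key matches the target, but the controller (bidirectional) key does not,
  # so this attach is expected to fail just like the wrong host-key case above.
  if rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
      echo "unexpected: attach with mismatched controller key succeeded" >&2
      exit 1
  fi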
00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.414 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.414 request: 00:30:20.414 { 00:30:20.414 "name": "nvme0", 00:30:20.414 "trtype": "tcp", 00:30:20.414 "traddr": "10.0.0.1", 00:30:20.414 "adrfam": "ipv4", 00:30:20.414 "trsvcid": "4420", 00:30:20.414 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:30:20.414 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:30:20.414 "prchk_reftag": false, 00:30:20.414 "prchk_guard": false, 00:30:20.414 "hdgst": false, 00:30:20.414 "ddgst": false, 00:30:20.414 "dhchap_key": "key1", 00:30:20.415 "dhchap_ctrlr_key": "ckey2", 00:30:20.415 "allow_unrecognized_csi": false, 00:30:20.415 "method": "bdev_nvme_attach_controller", 00:30:20.415 "req_id": 1 00:30:20.415 } 00:30:20.415 Got JSON-RPC error response 00:30:20.415 response: 00:30:20.415 { 00:30:20.415 "code": -5, 00:30:20.415 "message": "Input/output 
error" 00:30:20.415 } 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.415 nvme0n1 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.415 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.673 request: 00:30:20.673 { 00:30:20.673 "name": "nvme0", 00:30:20.673 "dhchap_key": "key1", 00:30:20.673 "dhchap_ctrlr_key": "ckey2", 00:30:20.673 "method": "bdev_nvme_set_keys", 00:30:20.673 "req_id": 1 00:30:20.673 } 00:30:20.673 Got JSON-RPC error response 00:30:20.673 response: 00:30:20.673 { 00:30:20.673 "code": -13, 00:30:20.673 "message": "Permission denied" 00:30:20.673 } 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:20.673 18:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:22.045 18:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.045 18:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:22.045 18:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.045 18:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.045 18:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.045 18:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:30:22.045 18:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDRlZjc0YjExYTFjMjU0OWU3YTBiMjI2ZDcxZDNmMmJkNWQyNDE4OTRmNTJlZTljVfH+lw==: 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: ]] 00:30:22.979 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzdjNzQ3MjY2NjA4ZmE2Nzg5MjAwMDg5ZTdmZjBiNWQ0NWY1OTM1ZmZlY2Y2ZjJlmooPsQ==: 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.980 nvme0n1 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgyYTgyZGJiZGRkZjQ3Y2NlOWUxMTc1MDY3ODlkZjDdhGqt: 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: ]] 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQ3MDE1NGRkYTkzNjlkNDQwZDUxNTRjM2ZhMDU4Mjk78toz: 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.980 request: 00:30:22.980 { 00:30:22.980 "name": "nvme0", 00:30:22.980 "dhchap_key": "key2", 00:30:22.980 "dhchap_ctrlr_key": "ckey1", 00:30:22.980 "method": "bdev_nvme_set_keys", 00:30:22.980 "req_id": 1 00:30:22.980 } 00:30:22.980 Got JSON-RPC error response 00:30:22.980 response: 00:30:22.980 { 00:30:22.980 "code": -13, 00:30:22.980 "message": "Permission denied" 00:30:22.980 } 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:30:22.980 18:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:30:24.355 18:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:30:24.355 18:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:30:24.355 18:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.355 18:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.355 18:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:30:24.355 18:26:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:24.355 rmmod nvme_tcp 00:30:24.355 rmmod nvme_fabrics 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 704062 ']' 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 704062 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 704062 ']' 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 704062 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 704062 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 704062' 00:30:24.355 killing process with pid 704062 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 704062 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 704062 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:30:24.355 18:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:30:26.890 18:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:27.824 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:27.824 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:27.824 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:27.824 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:27.824 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:27.824 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:27.824 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:27.824 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:27.824 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:30:27.824 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:30:27.824 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:30:27.824 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:30:27.824 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:30:27.824 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:30:27.824 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:30:27.824 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:30:28.826 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:30:29.085 18:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.r8c /tmp/spdk.key-null.r2j /tmp/spdk.key-sha256.zYv /tmp/spdk.key-sha384.vZc /tmp/spdk.key-sha512.wtq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:30:29.085 18:26:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:30.020 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:30.020 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:30.020 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 
00:30:30.020 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:30.020 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:30.020 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:30.020 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:30.020 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:30.020 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:30.020 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:30.020 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:30.020 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:30.020 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:30.020 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:30.020 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:30.020 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:30.020 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:30.278 00:30:30.278 real 0m51.081s 00:30:30.278 user 0m48.685s 00:30:30.278 sys 0m6.123s 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.278 ************************************ 00:30:30.278 END TEST nvmf_auth_host 00:30:30.278 ************************************ 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:30.278 ************************************ 00:30:30.278 START TEST nvmf_digest 00:30:30.278 ************************************ 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:30.278 * Looking for test storage... 
00:30:30.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:30:30.278 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:30.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.537 --rc genhtml_branch_coverage=1 00:30:30.537 --rc genhtml_function_coverage=1 00:30:30.537 --rc genhtml_legend=1 00:30:30.537 --rc geninfo_all_blocks=1 00:30:30.537 --rc geninfo_unexecuted_blocks=1 00:30:30.537 00:30:30.537 ' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:30.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.537 --rc genhtml_branch_coverage=1 00:30:30.537 --rc genhtml_function_coverage=1 00:30:30.537 --rc genhtml_legend=1 00:30:30.537 --rc geninfo_all_blocks=1 00:30:30.537 --rc geninfo_unexecuted_blocks=1 00:30:30.537 00:30:30.537 ' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:30.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.537 --rc genhtml_branch_coverage=1 00:30:30.537 --rc genhtml_function_coverage=1 00:30:30.537 --rc genhtml_legend=1 00:30:30.537 --rc geninfo_all_blocks=1 00:30:30.537 --rc geninfo_unexecuted_blocks=1 00:30:30.537 00:30:30.537 ' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:30.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:30.537 --rc genhtml_branch_coverage=1 00:30:30.537 --rc genhtml_function_coverage=1 00:30:30.537 --rc genhtml_legend=1 00:30:30.537 --rc geninfo_all_blocks=1 00:30:30.537 --rc geninfo_unexecuted_blocks=1 00:30:30.537 00:30:30.537 ' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.537 
18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:30.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:30:30.537 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:30.538 18:26:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:30:30.538 18:26:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.068 
18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:33.068 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:33.068 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:33.068 Found net devices under 0000:09:00.0: cvl_0_0 
00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:33.068 Found net devices under 0000:09:00.1: cvl_0_1 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:33.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:30:33.068 00:30:33.068 --- 10.0.0.2 ping statistics --- 00:30:33.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.068 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:30:33.068 00:30:33.068 --- 10.0.0.1 ping statistics --- 00:30:33.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.068 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.068 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:33.069 ************************************ 00:30:33.069 START TEST nvmf_digest_clean 00:30:33.069 ************************************ 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=713683 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 713683 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 713683 ']' 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:33.069 18:26:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:33.069 [2024-11-26 18:26:20.808752] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:30:33.069 [2024-11-26 18:26:20.808843] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.069 [2024-11-26 18:26:20.886684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.069 [2024-11-26 18:26:20.945893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:33.069 [2024-11-26 18:26:20.945956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:33.069 [2024-11-26 18:26:20.945984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:33.069 [2024-11-26 18:26:20.945995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:33.069 [2024-11-26 18:26:20.946005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
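For orientation, the nvmfappstart sequence the trace records at this point reduces to the short shell sketch below. The binary path, core mask, RPC socket, and PID are the ones from this particular run; the background-and-capture lines (& and $!) are how the helper normally starts the app and are not printed verbatim in the trace, so treat them as an assumption.

  # condensed sketch of the target bring-up recorded above; values are from this run
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!                 # 713683 in this run
  waitforlisten "$nvmfpid"   # returns once the app answers RPCs on /var/tmp/spdk.sock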
00:30:33.069 [2024-11-26 18:26:20.946633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.069 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:33.327 null0 00:30:33.327 [2024-11-26 18:26:21.185056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.327 [2024-11-26 18:26:21.209276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=713711 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 713711 /var/tmp/bperf.sock 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 713711 ']' 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:30:33.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:33.327 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:33.327 [2024-11-26 18:26:21.261961] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:30:33.327 [2024-11-26 18:26:21.262039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid713711 ] 00:30:33.327 [2024-11-26 18:26:21.335410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.585 [2024-11-26 18:26:21.396034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.585 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.585 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:33.585 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:33.585 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:33.585 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:34.151 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:34.151 18:26:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:34.409 nvme0n1 00:30:34.409 18:26:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:34.409 18:26:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:34.409 Running I/O for 2 seconds... 
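Editor's note on the result tables that follow: each bperf run reports throughput both as IOPS and MiB/s, and the two are tied together by the configured I/O size, so the figures further down can be sanity-checked directly. A quick check against the numbers reported for the 4 KiB and 128 KiB randread runs below:

    # MiB/s = IOPS * io_size_bytes / 2**20
    def mibps(iops, io_size):
        return iops * io_size / (1 << 20)

    print(round(mibps(18387.31, 4096), 2))    # ~71.83  (randread, 4 KiB, qd 128)
    print(round(mibps(5790.90, 131072), 2))   # ~723.86 (randread, 128 KiB, qd 16)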
00:30:36.715 18122.00 IOPS, 70.79 MiB/s [2024-11-26T17:26:24.726Z] 18368.50 IOPS, 71.75 MiB/s 00:30:36.715 Latency(us) 00:30:36.715 [2024-11-26T17:26:24.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.715 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:36.715 nvme0n1 : 2.00 18387.31 71.83 0.00 0.00 6952.60 3422.44 14951.92 00:30:36.715 [2024-11-26T17:26:24.726Z] =================================================================================================================== 00:30:36.715 [2024-11-26T17:26:24.726Z] Total : 18387.31 71.83 0.00 0.00 6952.60 3422.44 14951.92 00:30:36.715 { 00:30:36.715 "results": [ 00:30:36.715 { 00:30:36.715 "job": "nvme0n1", 00:30:36.715 "core_mask": "0x2", 00:30:36.715 "workload": "randread", 00:30:36.715 "status": "finished", 00:30:36.715 "queue_depth": 128, 00:30:36.715 "io_size": 4096, 00:30:36.715 "runtime": 2.004915, 00:30:36.715 "iops": 18387.313177865395, 00:30:36.715 "mibps": 71.8254421010367, 00:30:36.715 "io_failed": 0, 00:30:36.715 "io_timeout": 0, 00:30:36.715 "avg_latency_us": 6952.600863651663, 00:30:36.715 "min_latency_us": 3422.4355555555558, 00:30:36.715 "max_latency_us": 14951.917037037038 00:30:36.715 } 00:30:36.715 ], 00:30:36.715 "core_count": 1 00:30:36.715 } 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:36.715 | select(.opcode=="crc32c") 00:30:36.715 | "\(.module_name) \(.executed)"' 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 713711 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 713711 ']' 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 713711 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.715 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713711 00:30:36.973 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:36.973 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:36.973 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713711' 00:30:36.973 killing process with pid 713711 00:30:36.973 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 713711 00:30:36.973 Received shutdown signal, test time was about 2.000000 seconds 00:30:36.973 00:30:36.973 Latency(us) 00:30:36.973 [2024-11-26T17:26:24.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.973 [2024-11-26T17:26:24.984Z] =================================================================================================================== 00:30:36.973 [2024-11-26T17:26:24.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:36.973 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 713711 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=714237 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 714237 /var/tmp/bperf.sock 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 714237 ']' 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:37.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.231 18:26:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:37.231 [2024-11-26 18:26:25.033733] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:30:37.231 [2024-11-26 18:26:25.033819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid714237 ] 00:30:37.231 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:37.231 Zero copy mechanism will not be used. 00:30:37.231 [2024-11-26 18:26:25.099441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.231 [2024-11-26 18:26:25.157887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.489 18:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.489 18:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:37.489 18:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:37.489 18:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:37.489 18:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:37.747 18:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:37.747 18:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:38.005 nvme0n1 00:30:38.005 18:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:38.005 18:26:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:38.263 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:38.263 Zero copy mechanism will not be used. 00:30:38.263 Running I/O for 2 seconds... 
00:30:40.127 5983.00 IOPS, 747.88 MiB/s [2024-11-26T17:26:28.138Z] 5794.50 IOPS, 724.31 MiB/s 00:30:40.127 Latency(us) 00:30:40.127 [2024-11-26T17:26:28.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:40.127 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:40.127 nvme0n1 : 2.00 5790.90 723.86 0.00 0.00 2758.72 716.04 4733.16 00:30:40.127 [2024-11-26T17:26:28.138Z] =================================================================================================================== 00:30:40.127 [2024-11-26T17:26:28.138Z] Total : 5790.90 723.86 0.00 0.00 2758.72 716.04 4733.16 00:30:40.127 { 00:30:40.127 "results": [ 00:30:40.127 { 00:30:40.127 "job": "nvme0n1", 00:30:40.127 "core_mask": "0x2", 00:30:40.127 "workload": "randread", 00:30:40.127 "status": "finished", 00:30:40.127 "queue_depth": 16, 00:30:40.127 "io_size": 131072, 00:30:40.127 "runtime": 2.00435, 00:30:40.127 "iops": 5790.904782098934, 00:30:40.127 "mibps": 723.8630977623668, 00:30:40.127 "io_failed": 0, 00:30:40.127 "io_timeout": 0, 00:30:40.127 "avg_latency_us": 2758.7164945802183, 00:30:40.127 "min_latency_us": 716.0414814814815, 00:30:40.127 "max_latency_us": 4733.155555555555 00:30:40.127 } 00:30:40.127 ], 00:30:40.127 "core_count": 1 00:30:40.127 } 00:30:40.127 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:40.127 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:40.127 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:40.127 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:40.127 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:40.127 | select(.opcode=="crc32c") 00:30:40.128 | "\(.module_name) \(.executed)"' 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 714237 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 714237 ']' 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 714237 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 714237 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 714237' 00:30:40.693 killing process with pid 714237 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 714237 00:30:40.693 Received shutdown signal, test time was about 2.000000 seconds 00:30:40.693 00:30:40.693 Latency(us) 00:30:40.693 [2024-11-26T17:26:28.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:40.693 [2024-11-26T17:26:28.704Z] =================================================================================================================== 00:30:40.693 [2024-11-26T17:26:28.704Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:40.693 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 714237 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=714637 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 714637 /var/tmp/bperf.sock 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 714637 ']' 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:40.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.952 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:40.952 [2024-11-26 18:26:28.751394] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:30:40.952 [2024-11-26 18:26:28.751480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid714637 ] 00:30:40.952 [2024-11-26 18:26:28.818525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.952 [2024-11-26 18:26:28.875240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.210 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.210 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:41.210 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:41.210 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:41.210 18:26:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:41.467 18:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:41.467 18:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:41.724 nvme0n1 00:30:41.724 18:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:41.724 18:26:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:41.981 Running I/O for 2 seconds... 
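Editor's note: after each run the harness reads accel framework statistics over the bperf RPC socket and keeps only the crc32c rows, which is how it confirms the digest was computed by the expected module ("software" here, since DSA is disabled). A rough Python equivalent of the jq filter shown above, assuming the same rpc.py path and socket:

    import json, subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    out = subprocess.run(["python3", RPC, "-s", "/var/tmp/bperf.sock", "accel_get_stats"],
                         check=True, capture_output=True, text=True).stdout
    for op in json.loads(out).get("operations", []):
        if op["opcode"] == "crc32c":
            # same fields the jq filter extracts: module_name and executed count
            print(op["module_name"], op["executed"])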
00:30:43.847 19370.00 IOPS, 75.66 MiB/s [2024-11-26T17:26:31.858Z] 18893.00 IOPS, 73.80 MiB/s 00:30:43.847 Latency(us) 00:30:43.847 [2024-11-26T17:26:31.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.847 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.847 nvme0n1 : 2.01 18895.89 73.81 0.00 0.00 6758.46 2827.76 16699.54 00:30:43.847 [2024-11-26T17:26:31.858Z] =================================================================================================================== 00:30:43.847 [2024-11-26T17:26:31.858Z] Total : 18895.89 73.81 0.00 0.00 6758.46 2827.76 16699.54 00:30:43.847 { 00:30:43.847 "results": [ 00:30:43.847 { 00:30:43.847 "job": "nvme0n1", 00:30:43.847 "core_mask": "0x2", 00:30:43.847 "workload": "randwrite", 00:30:43.847 "status": "finished", 00:30:43.847 "queue_depth": 128, 00:30:43.847 "io_size": 4096, 00:30:43.847 "runtime": 2.008162, 00:30:43.847 "iops": 18895.8858896842, 00:30:43.847 "mibps": 73.8120542565789, 00:30:43.847 "io_failed": 0, 00:30:43.847 "io_timeout": 0, 00:30:43.847 "avg_latency_us": 6758.457169740234, 00:30:43.847 "min_latency_us": 2827.757037037037, 00:30:43.847 "max_latency_us": 16699.543703703705 00:30:43.847 } 00:30:43.847 ], 00:30:43.847 "core_count": 1 00:30:43.847 } 00:30:43.847 18:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:43.847 18:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:43.847 18:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:43.847 18:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:43.847 | select(.opcode=="crc32c") 00:30:43.847 | "\(.module_name) \(.executed)"' 00:30:43.847 18:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 714637 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 714637 ']' 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 714637 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 714637 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:44.105 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 714637' 00:30:44.364 killing process with pid 714637 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 714637 00:30:44.364 Received shutdown signal, test time was about 2.000000 seconds 00:30:44.364 00:30:44.364 Latency(us) 00:30:44.364 [2024-11-26T17:26:32.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:44.364 [2024-11-26T17:26:32.375Z] =================================================================================================================== 00:30:44.364 [2024-11-26T17:26:32.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 714637 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=715049 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 715049 /var/tmp/bperf.sock 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 715049 ']' 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:44.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.364 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:44.623 [2024-11-26 18:26:32.388122] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:30:44.623 [2024-11-26 18:26:32.388205] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715049 ] 00:30:44.623 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:44.623 Zero copy mechanism will not be used. 00:30:44.623 [2024-11-26 18:26:32.453031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.623 [2024-11-26 18:26:32.509451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.623 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.623 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:30:44.623 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:30:44.623 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:30:44.623 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:45.191 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:45.191 18:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:45.756 nvme0n1 00:30:45.756 18:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:30:45.756 18:26:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:45.756 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:45.756 Zero copy mechanism will not be used. 00:30:45.756 Running I/O for 2 seconds... 
00:30:47.616 6187.00 IOPS, 773.38 MiB/s [2024-11-26T17:26:35.627Z] 6061.00 IOPS, 757.62 MiB/s 00:30:47.616 Latency(us) 00:30:47.616 [2024-11-26T17:26:35.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:47.616 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:47.616 nvme0n1 : 2.00 6057.54 757.19 0.00 0.00 2633.14 1929.67 12330.48 00:30:47.616 [2024-11-26T17:26:35.627Z] =================================================================================================================== 00:30:47.616 [2024-11-26T17:26:35.627Z] Total : 6057.54 757.19 0.00 0.00 2633.14 1929.67 12330.48 00:30:47.616 { 00:30:47.616 "results": [ 00:30:47.616 { 00:30:47.616 "job": "nvme0n1", 00:30:47.616 "core_mask": "0x2", 00:30:47.616 "workload": "randwrite", 00:30:47.616 "status": "finished", 00:30:47.616 "queue_depth": 16, 00:30:47.616 "io_size": 131072, 00:30:47.616 "runtime": 2.00461, 00:30:47.616 "iops": 6057.537376347519, 00:30:47.616 "mibps": 757.1921720434399, 00:30:47.616 "io_failed": 0, 00:30:47.616 "io_timeout": 0, 00:30:47.616 "avg_latency_us": 2633.1445347876083, 00:30:47.616 "min_latency_us": 1929.671111111111, 00:30:47.616 "max_latency_us": 12330.477037037037 00:30:47.616 } 00:30:47.616 ], 00:30:47.616 "core_count": 1 00:30:47.616 } 00:30:47.616 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:30:47.616 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:30:47.616 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:47.616 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:47.616 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:47.616 | select(.opcode=="crc32c") 00:30:47.616 | "\(.module_name) \(.executed)"' 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 715049 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 715049 ']' 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 715049 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.873 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 715049 00:30:48.130 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:48.130 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:30:48.130 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 715049' 00:30:48.130 killing process with pid 715049 00:30:48.130 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 715049 00:30:48.130 Received shutdown signal, test time was about 2.000000 seconds 00:30:48.130 00:30:48.130 Latency(us) 00:30:48.130 [2024-11-26T17:26:36.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.130 [2024-11-26T17:26:36.141Z] =================================================================================================================== 00:30:48.130 [2024-11-26T17:26:36.141Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.130 18:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 715049 00:30:48.130 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 713683 00:30:48.130 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 713683 ']' 00:30:48.130 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 713683 00:30:48.130 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:30:48.388 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:48.388 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 713683 00:30:48.388 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:48.388 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:48.388 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 713683' 00:30:48.388 killing process with pid 713683 00:30:48.388 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 713683 00:30:48.388 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 713683 00:30:48.646 00:30:48.646 real 0m15.657s 00:30:48.646 user 0m31.374s 00:30:48.646 sys 0m4.305s 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:30:48.646 ************************************ 00:30:48.646 END TEST nvmf_digest_clean 00:30:48.646 ************************************ 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:48.646 ************************************ 00:30:48.646 START TEST nvmf_digest_error 00:30:48.646 ************************************ 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=715608 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 715608 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 715608 ']' 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:48.646 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:48.646 [2024-11-26 18:26:36.518425] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:30:48.646 [2024-11-26 18:26:36.518509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:48.646 [2024-11-26 18:26:36.589300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.646 [2024-11-26 18:26:36.645259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:48.646 [2024-11-26 18:26:36.645333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:48.646 [2024-11-26 18:26:36.645349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:48.646 [2024-11-26 18:26:36.645361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:48.646 [2024-11-26 18:26:36.645371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:48.646 [2024-11-26 18:26:36.645983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:48.904 [2024-11-26 18:26:36.766720] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.904 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:48.904 null0 00:30:48.904 [2024-11-26 18:26:36.891822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.162 [2024-11-26 18:26:36.916087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=715632 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 715632 /var/tmp/bperf.sock 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 715632 ']' 
00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:49.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:49.162 18:26:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:49.163 [2024-11-26 18:26:36.970609] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:30:49.163 [2024-11-26 18:26:36.970704] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715632 ] 00:30:49.163 [2024-11-26 18:26:37.046802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.163 [2024-11-26 18:26:37.110784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.420 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:49.420 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:49.420 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:49.420 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:49.678 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:49.678 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.678 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:49.678 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.678 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:49.678 18:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:50.242 nvme0n1 00:30:50.242 18:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:50.242 18:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.242 18:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:50.242 
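Editor's note on the error path: nvmf_digest_error differs from the clean path in that crc32c is assigned to the accel "error" module on the target and corruption is injected before the I/O runs, so the digests carried on the wire come back wrong and the initiator completions below surface as data digest errors / transient transport errors. A condensed sketch of the calls visible in this log (the corrupt injection with -i 256 appears just below); sockets and arguments are copied from the log, the helper is illustrative only:

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    TGT_SOCK = "/var/tmp/spdk.sock"
    BPERF_SOCK = "/var/tmp/bperf.sock"

    def rpc(sock, *args):
        subprocess.run(["python3", RPC, "-s", sock, *args], check=True)

    # target side (rpc_cmd): route crc32c through the error-injection accel module
    rpc(TGT_SOCK, "accel_assign_opc", "-o", "crc32c", "-m", "error")
    # initiator side (bperf): attach the controller with data digest enabled
    rpc(BPERF_SOCK, "bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
        "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
        "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
    # target side again: corrupt the next 256 crc32c operations, then run the I/O
    rpc(TGT_SOCK, "accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256")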
18:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.242 18:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:50.243 18:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:50.243 Running I/O for 2 seconds... 00:30:50.243 [2024-11-26 18:26:38.162849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.243 [2024-11-26 18:26:38.162910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.243 [2024-11-26 18:26:38.162930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.243 [2024-11-26 18:26:38.180360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.243 [2024-11-26 18:26:38.180393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.243 [2024-11-26 18:26:38.180435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.243 [2024-11-26 18:26:38.196427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.243 [2024-11-26 18:26:38.196457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.243 [2024-11-26 18:26:38.196489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.243 [2024-11-26 18:26:38.210731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.243 [2024-11-26 18:26:38.210762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.243 [2024-11-26 18:26:38.210779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.243 [2024-11-26 18:26:38.225703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.243 [2024-11-26 18:26:38.225732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.243 [2024-11-26 18:26:38.225764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.243 [2024-11-26 18:26:38.237224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.243 [2024-11-26 18:26:38.237254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.243 [2024-11-26 18:26:38.237272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:30:50.243 [2024-11-26 18:26:38.250670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.243 [2024-11-26 18:26:38.250701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.243 [2024-11-26 18:26:38.250726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.267065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.267093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.267124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.279954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.279996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.280014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.294471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.294501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.294535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.307536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.307568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.307586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.320271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.320325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.320345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.334702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.334734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.334751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.349627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.349659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.349677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.360996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.361024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.361055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.377179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.377216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.377233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.392051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.392083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:19137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.392100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.406708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.406740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.406757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.422687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.422719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.422736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.438733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.438764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.438781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.450105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.450134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.450166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.466539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.466568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.466601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.481419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.501 [2024-11-26 18:26:38.481450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.501 [2024-11-26 18:26:38.481467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.501 [2024-11-26 18:26:38.494641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.502 [2024-11-26 18:26:38.494671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.502 [2024-11-26 18:26:38.494689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.502 [2024-11-26 18:26:38.507092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.502 [2024-11-26 18:26:38.507123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.502 [2024-11-26 18:26:38.507140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.520360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.520391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.520408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.531704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.531735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:50.760 [2024-11-26 18:26:38.531752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.545577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.545622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.545639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.561066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.561096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.561114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.574363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.574393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.574410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.586563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.586593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.586610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.600235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.600266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.600298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.613133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.613161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.613201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.626452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.626483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:23854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.626500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.637771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.637816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.637833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.651431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.651460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.651491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.667608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.667636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.667666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.681968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.681998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.682031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.693258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.693308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.693328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.707658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.707685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.707717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.723480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.723511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.760 [2024-11-26 18:26:38.723528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.760 [2024-11-26 18:26:38.736067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.760 [2024-11-26 18:26:38.736098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.761 [2024-11-26 18:26:38.736115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.761 [2024-11-26 18:26:38.749766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.761 [2024-11-26 18:26:38.749811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.761 [2024-11-26 18:26:38.749828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.761 [2024-11-26 18:26:38.762515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:50.761 [2024-11-26 18:26:38.762560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.761 [2024-11-26 18:26:38.762578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.775863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.775891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.775908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.787100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.787128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.787159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.802655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.802683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.802713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.815426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 
00:30:51.019 [2024-11-26 18:26:38.815456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.815488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.828323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.828352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.828385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.842452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.842481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.842520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.854468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.854499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.854516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.867124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.867154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.867170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.879840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.879870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.879887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.894344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.894372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.894405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.905230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.905259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.905277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.920949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.920977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.921009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.936198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.936242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.936258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.951871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.951901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.951917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.965885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.965927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.965946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.977731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.977758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.977789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:38.991068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:38.991096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:38.991127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:39.004199] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:39.004227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:39.004258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.019 [2024-11-26 18:26:39.018045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.019 [2024-11-26 18:26:39.018073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.019 [2024-11-26 18:26:39.018105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.277 [2024-11-26 18:26:39.034336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.277 [2024-11-26 18:26:39.034368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.277 [2024-11-26 18:26:39.034386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.277 [2024-11-26 18:26:39.048766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.277 [2024-11-26 18:26:39.048796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.277 [2024-11-26 18:26:39.048814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.277 [2024-11-26 18:26:39.065186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.277 [2024-11-26 18:26:39.065232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.277 [2024-11-26 18:26:39.065251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.077449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.077477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.077509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.090784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.090815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.090832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.105838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.105866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.105897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.122118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.122147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.122164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.136667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.136698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.136716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.149723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.149751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.149782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 18323.00 IOPS, 71.57 MiB/s [2024-11-26T17:26:39.289Z] [2024-11-26 18:26:39.165274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.165310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.165330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.180871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.180915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.180932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.195590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.195635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.195652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.207067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.207094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.207130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.220572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.220615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.220631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.234682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.234712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.234728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.246200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.246227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.246256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.259482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.278 [2024-11-26 18:26:39.259509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.278 [2024-11-26 18:26:39.259540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.278 [2024-11-26 18:26:39.273875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.279 [2024-11-26 18:26:39.273904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.279 [2024-11-26 18:26:39.273920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.537 [2024-11-26 18:26:39.290162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.537 [2024-11-26 18:26:39.290189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:51.537 [2024-11-26 18:26:39.290219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.537 [2024-11-26 18:26:39.304910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.537 [2024-11-26 18:26:39.304940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.537 [2024-11-26 18:26:39.304956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.537 [2024-11-26 18:26:39.317260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.537 [2024-11-26 18:26:39.317309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.537 [2024-11-26 18:26:39.317328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.537 [2024-11-26 18:26:39.330341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.537 [2024-11-26 18:26:39.330383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.537 [2024-11-26 18:26:39.330399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.537 [2024-11-26 18:26:39.342949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.537 [2024-11-26 18:26:39.342976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.537 [2024-11-26 18:26:39.343007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.537 [2024-11-26 18:26:39.357113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.537 [2024-11-26 18:26:39.357141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.537 [2024-11-26 18:26:39.357171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.537 [2024-11-26 18:26:39.372592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.537 [2024-11-26 18:26:39.372634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.537 [2024-11-26 18:26:39.372650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.537 [2024-11-26 18:26:39.388773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.537 [2024-11-26 18:26:39.388801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10155 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.537 [2024-11-26 18:26:39.388832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.537 [2024-11-26 18:26:39.401772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.401799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.401829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.414276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.414325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.414344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.427813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.427840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.427871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.441763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.441793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.441819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.457442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.457472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.457504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.473487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.473518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.473537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.486884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.486914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.486932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.502726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.502756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.502773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.514229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.514258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.514290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.529956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.529985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.530002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.538 [2024-11-26 18:26:39.545946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.538 [2024-11-26 18:26:39.545976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.538 [2024-11-26 18:26:39.546007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.562339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.562390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.562408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.573123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.573157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.573189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.588490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 
00:30:51.797 [2024-11-26 18:26:39.588521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.588554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.605155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.605183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.605214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.618473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.618504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.618538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.632638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.632680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.632696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.648518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.648549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.648566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.665207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.665234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.665265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.678592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.678635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.678651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.690918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.690946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.690976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.704790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.704818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.704849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.719736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.719762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.719792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.736455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.736484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.736514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.749157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.749185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.749216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.763366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.763394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.763425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.777635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.777680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.777697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.797 [2024-11-26 18:26:39.794199] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.797 [2024-11-26 18:26:39.794229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.797 [2024-11-26 18:26:39.794246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:51.798 [2024-11-26 18:26:39.806201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:51.798 [2024-11-26 18:26:39.806249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.806266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.822212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.822239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.822276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.838186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.838216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.838232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.852366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.852397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.852414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.868425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.868455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.868473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.879388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.879415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.879446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:30:52.055 [2024-11-26 18:26:39.892094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.892123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.892156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.905994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.906022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.906053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.920764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.920791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.920821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.932439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.932467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.932499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.948415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.948444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.948476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.964581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.964624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.964640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.980685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.980716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.980733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:39.991464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:39.991492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:39.991524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:40.007167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:40.007204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:40.007225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:40.020210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.055 [2024-11-26 18:26:40.020245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.055 [2024-11-26 18:26:40.020262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.055 [2024-11-26 18:26:40.033744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.056 [2024-11-26 18:26:40.033785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.056 [2024-11-26 18:26:40.033805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.056 [2024-11-26 18:26:40.050640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.056 [2024-11-26 18:26:40.050683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.056 [2024-11-26 18:26:40.050717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.313 [2024-11-26 18:26:40.066439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.313 [2024-11-26 18:26:40.066475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.313 [2024-11-26 18:26:40.066504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.313 [2024-11-26 18:26:40.077534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.313 [2024-11-26 18:26:40.077564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.313 [2024-11-26 18:26:40.077581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.313 [2024-11-26 18:26:40.094316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.313 [2024-11-26 18:26:40.094362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.313 [2024-11-26 18:26:40.094382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.313 [2024-11-26 18:26:40.112044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.313 [2024-11-26 18:26:40.112073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.313 [2024-11-26 18:26:40.112104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.313 [2024-11-26 18:26:40.125478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.313 [2024-11-26 18:26:40.125509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.313 [2024-11-26 18:26:40.125527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.313 [2024-11-26 18:26:40.137810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.313 [2024-11-26 18:26:40.137854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.313 [2024-11-26 18:26:40.137873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.313 18044.00 IOPS, 70.48 MiB/s [2024-11-26T17:26:40.324Z] [2024-11-26 18:26:40.153870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207f880) 00:30:52.313 [2024-11-26 18:26:40.153898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:52.313 [2024-11-26 18:26:40.153913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:52.313 00:30:52.313 Latency(us) 00:30:52.313 [2024-11-26T17:26:40.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.313 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:52.313 nvme0n1 : 2.00 18064.12 70.56 0.00 0.00 7078.06 3422.44 22330.79 00:30:52.313 [2024-11-26T17:26:40.324Z] =================================================================================================================== 00:30:52.313 [2024-11-26T17:26:40.324Z] Total : 18064.12 70.56 0.00 0.00 7078.06 3422.44 22330.79 00:30:52.313 { 00:30:52.313 "results": [ 00:30:52.313 { 00:30:52.313 "job": "nvme0n1", 00:30:52.313 "core_mask": "0x2", 00:30:52.313 "workload": "randread", 00:30:52.313 "status": "finished", 00:30:52.313 "queue_depth": 128, 
00:30:52.313 "io_size": 4096, 00:30:52.313 "runtime": 2.004858, 00:30:52.313 "iops": 18064.122247061885, 00:30:52.313 "mibps": 70.56297752758549, 00:30:52.313 "io_failed": 0, 00:30:52.313 "io_timeout": 0, 00:30:52.313 "avg_latency_us": 7078.0578330838025, 00:30:52.313 "min_latency_us": 3422.4355555555558, 00:30:52.313 "max_latency_us": 22330.785185185185 00:30:52.313 } 00:30:52.313 ], 00:30:52.313 "core_count": 1 00:30:52.313 } 00:30:52.314 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:52.314 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:52.314 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:52.314 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:52.314 | .driver_specific 00:30:52.314 | .nvme_error 00:30:52.314 | .status_code 00:30:52.314 | .command_transient_transport_error' 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 715632 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 715632 ']' 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 715632 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 715632 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 715632' 00:30:52.571 killing process with pid 715632 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 715632 00:30:52.571 Received shutdown signal, test time was about 2.000000 seconds 00:30:52.571 00:30:52.571 Latency(us) 00:30:52.571 [2024-11-26T17:26:40.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.571 [2024-11-26T17:26:40.582Z] =================================================================================================================== 00:30:52.571 [2024-11-26T17:26:40.582Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:52.571 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 715632 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:30:52.829 
18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=716157 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 716157 /var/tmp/bperf.sock 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 716157 ']' 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:52.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.829 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:52.829 [2024-11-26 18:26:40.748057] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:30:52.829 [2024-11-26 18:26:40.748132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716157 ] 00:30:52.829 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:52.829 Zero copy mechanism will not be used. 
00:30:52.829 [2024-11-26 18:26:40.816681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.087 [2024-11-26 18:26:40.876092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.087 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.087 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:53.087 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:53.087 18:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:53.345 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:53.345 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.345 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:53.345 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.345 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:53.345 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:53.920 nvme0n1 00:30:53.920 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:53.920 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.920 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:53.920 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.920 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:53.920 18:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:53.920 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:53.920 Zero copy mechanism will not be used. 00:30:53.920 Running I/O for 2 seconds... 
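The run that produces the digest failures below is set up in the order visible in the trace; condensed into a sketch (target address, subsystem nqn, and the injection interval are taken from the commands above, and rpc_cmd is assumed to be the autotest helper that talks to the nvmf target application rather than to the bperf socket):

    # Collect per-NVMe error statistics and retry failed commands indefinitely,
    # so transient transport errors are counted instead of failing the job.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep CRC32C injection disabled while attaching, then attach the TCP
    # controller with data digest (--ddgst) enabled.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt every 32nd CRC32C computation so received data digests stop
    # matching, then drive the queued randread I/O for the two-second window.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests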
00:30:53.920 [2024-11-26 18:26:41.886905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:53.920 [2024-11-26 18:26:41.886968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-11-26 18:26:41.886988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:53.920 [2024-11-26 18:26:41.891631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:53.920 [2024-11-26 18:26:41.891672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-11-26 18:26:41.891691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:53.920 [2024-11-26 18:26:41.896291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:53.920 [2024-11-26 18:26:41.896351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-11-26 18:26:41.896369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:53.920 [2024-11-26 18:26:41.900942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:53.920 [2024-11-26 18:26:41.900972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-11-26 18:26:41.900988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:53.920 [2024-11-26 18:26:41.905529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:53.920 [2024-11-26 18:26:41.905561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-11-26 18:26:41.905593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:53.920 [2024-11-26 18:26:41.910275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:53.920 [2024-11-26 18:26:41.910327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.920 [2024-11-26 18:26:41.910359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:53.920 [2024-11-26 18:26:41.915093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:53.920 [2024-11-26 18:26:41.915123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.921 [2024-11-26 18:26:41.915154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:53.921 [2024-11-26 18:26:41.920262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:53.921 [2024-11-26 18:26:41.920295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:53.921 [2024-11-26 18:26:41.920324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.925084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.925116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.925134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.929654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.929686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.929703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.934219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.934250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.934267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.938919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.938950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.938967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.943832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.943864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.943896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.949470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.949501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.949517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.954860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.954891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.954909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.961713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.961743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.961774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.968116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.968148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.968166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.974750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.974784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.974801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.980864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.980896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.980924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.985868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.229 [2024-11-26 18:26:41.985899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.229 [2024-11-26 18:26:41.985916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.229 [2024-11-26 18:26:41.990498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:41.990528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:54.230 [2024-11-26 18:26:41.990545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:41.995268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:41.995317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:41.995336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:41.999978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.000009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.000026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.004755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.004786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.004803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.009872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.009904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.009921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.013116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.013162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.013178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.018710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.018742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.018760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.024118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.024162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.024180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.029815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.029846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.029862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.035470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.035516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.035533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.041491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.041522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.041539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.047004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.047050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.047067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.053397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.053429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.053446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.060347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.060380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.060397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.068586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.068618] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.068636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.075783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.075814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.075837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.082196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.082228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.082245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.087870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.087921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.087938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.093136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.093167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.093185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.099967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.099999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.100016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.105126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.105176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.105194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.230 [2024-11-26 18:26:42.109838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.230 [2024-11-26 18:26:42.109869] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.230 [2024-11-26 18:26:42.109886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.115420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.115451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.115469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.122333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.122364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.122381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.129337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.129374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.129393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.135054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.135087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.135105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.141312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.141343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.141360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.146932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.146965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.146982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.152386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.152417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.152434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.158256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.158287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.158312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.164641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.164688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.164705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.172316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.172348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.172366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.179481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.179513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.179531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.187369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.187401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.187419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.195012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.195043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.195073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.202209] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.202254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.202271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.208824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.208870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.208887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.216779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.216811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.216827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.231 [2024-11-26 18:26:42.224641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.231 [2024-11-26 18:26:42.224674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.231 [2024-11-26 18:26:42.224691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.231100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.231133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.231150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.236162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.236193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.236211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.240687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.240718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.240741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 
dnr:0 00:30:54.490 [2024-11-26 18:26:42.245264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.245294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.245320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.249770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.249800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.249816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.254177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.254206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.254223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.258617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.258647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.258664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.263088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.263131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.263147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.267539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.267568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.267599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.272136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.272181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.272198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.276743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.276773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.276789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.281329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.281365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.281382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.285796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.285827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.285844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.290338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.290368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.290385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.294794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.294824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.490 [2024-11-26 18:26:42.294841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.490 [2024-11-26 18:26:42.299249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.490 [2024-11-26 18:26:42.299279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.299295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.303666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.303697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.303714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.308207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.308237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.308254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.312569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.312599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.312616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.317313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.317343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.317365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.321814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.321844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.321860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.326238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.326267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.326284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.330735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.330765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.330781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.335212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.335240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 
[2024-11-26 18:26:42.335272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.339706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.339750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.339767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.344316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.344346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.344363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.348859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.348903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.348920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.353532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.353562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.353578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.358665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.358701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.358734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.364709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.364741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.364758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.370457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.370489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.370506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.375565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.375596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.375612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.380748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.380779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.380796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.385955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.385987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.386004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.391367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.391399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.391416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.396508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.396539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.396557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.401815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.401846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.401864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.407539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.407571] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.407588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.413349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.413380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.413398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.420616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.420649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.420667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.427801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.427833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.427850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.435053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.435085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.435102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.491 [2024-11-26 18:26:42.441793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.491 [2024-11-26 18:26:42.441825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.491 [2024-11-26 18:26:42.441843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.448530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.448563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.448581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.454809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.454841] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.454858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.461640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.461672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.461696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.467737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.467769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.467786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.472574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.472604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.472621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.477870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.477901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.477918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.483230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.483262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.483279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.488212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.488243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.488260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.493151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.493182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.493198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.492 [2024-11-26 18:26:42.497703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.492 [2024-11-26 18:26:42.497733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.492 [2024-11-26 18:26:42.497751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.502274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.502311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.502330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.506837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.506874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.506891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.511405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.511434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.511451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.515978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.516008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.516024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.520368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.520397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.520413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.524850] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.524880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.524897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.529568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.529599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.529616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.534860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.534892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.534909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.539748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.539778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.539796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.544898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.544929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.544947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.550353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.550385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.550402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.751 [2024-11-26 18:26:42.555572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.751 [2024-11-26 18:26:42.555603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.751 [2024-11-26 18:26:42.555620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.560771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.560802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.560820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.565957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.565988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.566005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.571172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.571203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.571220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.576470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.576501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.576519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.581675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.581706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.581724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.586734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.586765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.586782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.591832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.591863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.591887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.596997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.597027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.597044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.602971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.603001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.603019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.608251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.608282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.608298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.612888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.612918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.612935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.617506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.617537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.617553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.623216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.623246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.623263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.628877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.628908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.628925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.635329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.635360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.635377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.642878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.642911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.642929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.649372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.649404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.649421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.656980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.657012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.657029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.660833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.660864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.660881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.666149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.666179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.666197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.670863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.670894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:54.752 [2024-11-26 18:26:42.670911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.675414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.675445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.675462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.680537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.680569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.680586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.685978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.686009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.686038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.691571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.691602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.691620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.697426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.697458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.697476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.752 [2024-11-26 18:26:42.702840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.752 [2024-11-26 18:26:42.702886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.752 [2024-11-26 18:26:42.702904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.708088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.708119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.708137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.713119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.713151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.713168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.717635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.717666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.717683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.722729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.722760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.722777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.728189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.728220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.728237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.734437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.734475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.734492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.739885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.739916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.739934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.745226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.745257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.745275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.750276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.750314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.750333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:54.753 [2024-11-26 18:26:42.755779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:54.753 [2024-11-26 18:26:42.755811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:54.753 [2024-11-26 18:26:42.755829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.011 [2024-11-26 18:26:42.761464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.011 [2024-11-26 18:26:42.761495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.011 [2024-11-26 18:26:42.761512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.011 [2024-11-26 18:26:42.767392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.011 [2024-11-26 18:26:42.767424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.767442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.772877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.772908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.772924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.777288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.777325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.777343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.782453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 
00:30:55.012 [2024-11-26 18:26:42.782484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.782501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.787815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.787847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.787864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.793101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.793133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.793150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.798691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.798722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.798739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.803916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.803947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.803964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.809221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.809252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.809269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.814241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.814272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.814289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.819283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.819335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.819354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.824036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.824067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.824091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.828584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.828614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.828631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.833127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.833159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.833176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.837710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.837741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.837758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.842494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.842524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.842541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.847696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.847727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.847744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.853474] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.853506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.853523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.859296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.859336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.859365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.864668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.864699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.864716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.870475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.870507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.870524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.012 5695.00 IOPS, 711.88 MiB/s [2024-11-26T17:26:43.023Z] [2024-11-26 18:26:42.878548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.878580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.878597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.886188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.886220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.886238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.891958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.891990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.892007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.897665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.897697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.897714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.902645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.902678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.012 [2024-11-26 18:26:42.902695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.012 [2024-11-26 18:26:42.907409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.012 [2024-11-26 18:26:42.907441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.907457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.912031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.912061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.912078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.916885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.916916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.916939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.922763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.922795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.922811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.930373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.930407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.930425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.936481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.936513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.936531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.942346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.942378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.942395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.948127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.948174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.948192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.953608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.953640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.953657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.958379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.958409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.958426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.963055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.963097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.963129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.967688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.967726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:55.013 [2024-11-26 18:26:42.967759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.972437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.972468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.972484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.978092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.978138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.978156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.985246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.985277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.985294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.992342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.992373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.992391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:42.999118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:42.999149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:42.999167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:43.006828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:43.006861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:43.006879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.013 [2024-11-26 18:26:43.014819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.013 [2024-11-26 18:26:43.014852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.013 [2024-11-26 18:26:43.014869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.022920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.022953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.022971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.030437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.030469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.030486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.037966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.037998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.038015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.045576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.045608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.045625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.053067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.053098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.053115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.060680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.060711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.060729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.068488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.068524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.068541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.076541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.076572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.076590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.084429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.084461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.084478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.092119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.092151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.092189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.099896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.099941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.099957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.107703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.107733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.107749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.115231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.115263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.115280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.121524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 
00:30:55.273 [2024-11-26 18:26:43.121554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.121571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.127620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.127651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.127668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.131517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.273 [2024-11-26 18:26:43.131548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.273 [2024-11-26 18:26:43.131565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.273 [2024-11-26 18:26:43.136785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.136829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.136845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.141746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.141791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.141807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.146538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.146569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.146602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.151545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.151576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.151608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.157670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.157702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.157720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.163011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.163042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.163059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.168899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.168930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.168946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.174154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.174186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.174203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.177971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.178016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.178034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.182242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.182273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.182290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.187255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.187286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.187332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.192449] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.192480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.192497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.197527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.197558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.197575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.202346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.202377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.202394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.207223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.207267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.207283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.211933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.211977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.211993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.216693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.216750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.216766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.221435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.221466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.221483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 00:30:55.274 [2024-11-26 18:26:43.226080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.226110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.226126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.230619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.230655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.230686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.235284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.235323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.235340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.239780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.239810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.239826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.244298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.244335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.244352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.248822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.248867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.248883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.253335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.253365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.253382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.258042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.258072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.258089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.263167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.274 [2024-11-26 18:26:43.263198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.274 [2024-11-26 18:26:43.263215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.274 [2024-11-26 18:26:43.267734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.275 [2024-11-26 18:26:43.267764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.275 [2024-11-26 18:26:43.267780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.275 [2024-11-26 18:26:43.272382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.275 [2024-11-26 18:26:43.272411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.275 [2024-11-26 18:26:43.272428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.275 [2024-11-26 18:26:43.277100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.275 [2024-11-26 18:26:43.277130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.275 [2024-11-26 18:26:43.277147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.282007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.282039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.282055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.286724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.286754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.286771] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.291483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.291513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.291530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.296221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.296250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.296266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.301291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.301331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.301348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.306126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.306157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.306175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.311380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.311411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.311433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.316124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.316155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.316172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.320697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.320727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.320744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.325249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.325294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.325322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.329714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.534 [2024-11-26 18:26:43.329757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.534 [2024-11-26 18:26:43.329774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.534 [2024-11-26 18:26:43.334210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.334240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.334256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.338737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.338782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.338798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.343352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.343396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.343413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.348012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.348041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.348057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.352525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.352561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:55.535 [2024-11-26 18:26:43.352579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.357093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.357124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.357140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.361633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.361663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.361679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.366918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.366948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.366965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.373733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.373763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.373779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.381269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.381324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.381342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.388253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.388300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.388327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.396160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.396207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.396224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.402457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.402489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.402506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.407037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.407068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.407085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.411932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.411962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.411979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.416689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.416719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.416736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.422216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.422248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.422265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.427313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.427344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.427361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.431041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.431072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.431089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.436035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.436065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.436081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.442128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.442159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.442193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.449811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.449858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.449880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.456686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.456717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.456750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.463174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.463220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.463236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.468769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.468801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.468818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.474118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 
00:30:55.535 [2024-11-26 18:26:43.474149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.474166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.479468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.479499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.535 [2024-11-26 18:26:43.479516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.535 [2024-11-26 18:26:43.484446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.535 [2024-11-26 18:26:43.484477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.484495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.489508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.489538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.489555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.494626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.494657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.494674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.497649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.497680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.497697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.501882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.501911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.501943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.506624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.506669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.506685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.511378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.511408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.511425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.515885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.515915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.515933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.520511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.520555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.520571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.526034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.526079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.526096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.532939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.532985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.533002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.536 [2024-11-26 18:26:43.540368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.536 [2024-11-26 18:26:43.540400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.536 [2024-11-26 18:26:43.540423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.546517] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.546550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.546567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.551973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.552003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.552021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.558150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.558195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.558212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.566023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.566053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.566086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.572663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.572692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.572725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.578343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.578373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.578391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.584104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.584134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.584150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 
dnr:0 00:30:55.795 [2024-11-26 18:26:43.590162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.590194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.590211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.597840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.597878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.597896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.605157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.605189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.605206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.611178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.611222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.795 [2024-11-26 18:26:43.611240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.795 [2024-11-26 18:26:43.617261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.795 [2024-11-26 18:26:43.617315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.617334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.622644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.622674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.622690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.626339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.626370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.626388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.631569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.631600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.631617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.639175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.639205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.639236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.644967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.644999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.645016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.650616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.650646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.650663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.656257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.656311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.656331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.661984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.662014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.662031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.667913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.667943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.667976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.675666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.675698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.675715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.682944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.682990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.683007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.690959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.690992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.691024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.699117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.699164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.699181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.706773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.706817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.706839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.712593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.712638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.712654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.717534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.717564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:55.796 [2024-11-26 18:26:43.717596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.722743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.722788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.722804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.728345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.728376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.728393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.733707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.733739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.733756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.739438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.739470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.739488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.746134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.746166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.746183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.751461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.751492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.751509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.756753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.756791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.756809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.762071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.762103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.762121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.766813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.766844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.766861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.769443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.769472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.796 [2024-11-26 18:26:43.769488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.796 [2024-11-26 18:26:43.773482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.796 [2024-11-26 18:26:43.773525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.797 [2024-11-26 18:26:43.773540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.797 [2024-11-26 18:26:43.778081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.797 [2024-11-26 18:26:43.778109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.797 [2024-11-26 18:26:43.778124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.797 [2024-11-26 18:26:43.782704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.797 [2024-11-26 18:26:43.782733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.797 [2024-11-26 18:26:43.782748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.797 [2024-11-26 18:26:43.787361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.797 [2024-11-26 18:26:43.787390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.797 [2024-11-26 18:26:43.787422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:55.797 [2024-11-26 18:26:43.791844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.797 [2024-11-26 18:26:43.791873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.797 [2024-11-26 18:26:43.791896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:55.797 [2024-11-26 18:26:43.795782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.797 [2024-11-26 18:26:43.795824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.797 [2024-11-26 18:26:43.795842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:55.797 [2024-11-26 18:26:43.798747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.797 [2024-11-26 18:26:43.798777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.797 [2024-11-26 18:26:43.798794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:55.797 [2024-11-26 18:26:43.803807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:55.797 [2024-11-26 18:26:43.803838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.797 [2024-11-26 18:26:43.803856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.811163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.055 [2024-11-26 18:26:43.811193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.811225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.817681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.055 [2024-11-26 18:26:43.817727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.817745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.825234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 
00:30:56.055 [2024-11-26 18:26:43.825265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.825298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.832882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.055 [2024-11-26 18:26:43.832914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.832932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.840205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.055 [2024-11-26 18:26:43.840236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.840253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.847652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.055 [2024-11-26 18:26:43.847689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.847707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.855840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.055 [2024-11-26 18:26:43.855870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.855903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.861792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.055 [2024-11-26 18:26:43.861837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.861855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.866478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.055 [2024-11-26 18:26:43.866508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.866525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.871110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.055 [2024-11-26 18:26:43.871140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.055 [2024-11-26 18:26:43.871157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:56.055 [2024-11-26 18:26:43.875688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.056 [2024-11-26 18:26:43.875719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.056 [2024-11-26 18:26:43.875736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:56.056 5583.50 IOPS, 697.94 MiB/s [2024-11-26T17:26:44.067Z] [2024-11-26 18:26:43.882087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b52dc0) 00:30:56.056 [2024-11-26 18:26:43.882117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.056 [2024-11-26 18:26:43.882150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:56.056 00:30:56.056 Latency(us) 00:30:56.056 [2024-11-26T17:26:44.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.056 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:56.056 nvme0n1 : 2.00 5585.74 698.22 0.00 0.00 2859.22 731.21 11068.30 00:30:56.056 [2024-11-26T17:26:44.067Z] =================================================================================================================== 00:30:56.056 [2024-11-26T17:26:44.067Z] Total : 5585.74 698.22 0.00 0.00 2859.22 731.21 11068.30 00:30:56.056 { 00:30:56.056 "results": [ 00:30:56.056 { 00:30:56.056 "job": "nvme0n1", 00:30:56.056 "core_mask": "0x2", 00:30:56.056 "workload": "randread", 00:30:56.056 "status": "finished", 00:30:56.056 "queue_depth": 16, 00:30:56.056 "io_size": 131072, 00:30:56.056 "runtime": 2.004388, 00:30:56.056 "iops": 5585.744875742621, 00:30:56.056 "mibps": 698.2181094678276, 00:30:56.056 "io_failed": 0, 00:30:56.056 "io_timeout": 0, 00:30:56.056 "avg_latency_us": 2859.218211795218, 00:30:56.056 "min_latency_us": 731.2118518518519, 00:30:56.056 "max_latency_us": 11068.302222222223 00:30:56.056 } 00:30:56.056 ], 00:30:56.056 "core_count": 1 00:30:56.056 } 00:30:56.056 18:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:56.056 18:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:56.056 18:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:56.056 | .driver_specific 00:30:56.056 | .nvme_error 00:30:56.056 | .status_code 00:30:56.056 | .command_transient_transport_error' 00:30:56.056 18:26:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 361 > 0 
)) 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 716157 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 716157 ']' 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 716157 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 716157 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 716157' 00:30:56.314 killing process with pid 716157 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 716157 00:30:56.314 Received shutdown signal, test time was about 2.000000 seconds 00:30:56.314 00:30:56.314 Latency(us) 00:30:56.314 [2024-11-26T17:26:44.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.314 [2024-11-26T17:26:44.325Z] =================================================================================================================== 00:30:56.314 [2024-11-26T17:26:44.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:56.314 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 716157 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=716573 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 716573 /var/tmp/bperf.sock 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 716573 ']' 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:30:56.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.571 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:56.571 [2024-11-26 18:26:44.476727] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:30:56.572 [2024-11-26 18:26:44.476809] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716573 ] 00:30:56.572 [2024-11-26 18:26:44.542632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.829 [2024-11-26 18:26:44.601954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.829 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.829 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:56.829 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:56.829 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:57.087 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:57.087 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.087 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:57.087 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.087 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:57.087 18:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:57.653 nvme0n1 00:30:57.653 18:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:57.653 18:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.653 18:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:57.653 18:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.653 18:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:57.653 18:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
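For readability, the sequence that the trace above has just configured can be condensed into a short sketch assembled only from the commands visible in this log. It is an editorial summary, not part of the captured console output: the long workspace prefixes are shortened to ./, the bdevperf process is shown backgrounded with & (the harness itself backgrounds it and waits via waitforlisten), and the accel injection is shown without -s because the trace issues it through rpc_cmd, i.e. against the target application rather than the bperf socket.

  # Start bdevperf on the bperf RPC socket; -z defers the workload until perform_tests is issued
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # Collect per-controller NVMe error statistics and retry failed I/O indefinitely
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest enabled (--ddgst)
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # On the target side, corrupt accel crc32c results (-o crc32c -t corrupt -i 256, as issued in the trace above)
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the 2-second randwrite workload, then count completions that ended as transient transport errors
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected digest mismatch appears in the run output below as a "Data digest error on tqpair" message followed by a completion printed with COMMAND TRANSIENT TRANSPORT ERROR (00/22); the jq count of those completions is what host/digest.sh requires to be greater than zero, as it did with (( 361 > 0 )) for the randread pass above.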
00:30:57.653 Running I/O for 2 seconds... 00:30:57.653 [2024-11-26 18:26:45.505533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.653 [2024-11-26 18:26:45.505915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.653 [2024-11-26 18:26:45.505952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.653 [2024-11-26 18:26:45.519861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.653 [2024-11-26 18:26:45.520198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.653 [2024-11-26 18:26:45.520229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.653 [2024-11-26 18:26:45.534051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.653 [2024-11-26 18:26:45.534395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.653 [2024-11-26 18:26:45.534425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.653 [2024-11-26 18:26:45.548433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.653 [2024-11-26 18:26:45.548738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.653 [2024-11-26 18:26:45.548767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.653 [2024-11-26 18:26:45.562680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.654 [2024-11-26 18:26:45.562922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.654 [2024-11-26 18:26:45.562966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.654 [2024-11-26 18:26:45.576767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.654 [2024-11-26 18:26:45.577005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.654 [2024-11-26 18:26:45.577049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.654 [2024-11-26 18:26:45.590749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.654 [2024-11-26 18:26:45.590976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.654 [2024-11-26 18:26:45.591004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.654 [2024-11-26 18:26:45.604708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.654 [2024-11-26 18:26:45.604968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.654 [2024-11-26 18:26:45.604997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.654 [2024-11-26 18:26:45.618723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.654 [2024-11-26 18:26:45.618961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.654 [2024-11-26 18:26:45.619003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.654 [2024-11-26 18:26:45.632922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.654 [2024-11-26 18:26:45.633240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.654 [2024-11-26 18:26:45.633267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.654 [2024-11-26 18:26:45.646996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.654 [2024-11-26 18:26:45.647274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.654 [2024-11-26 18:26:45.647323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.654 [2024-11-26 18:26:45.660978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.654 [2024-11-26 18:26:45.661204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.654 [2024-11-26 18:26:45.661232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.912 [2024-11-26 18:26:45.674722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.912 [2024-11-26 18:26:45.675148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.912 [2024-11-26 18:26:45.675191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.912 [2024-11-26 18:26:45.688696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.912 [2024-11-26 18:26:45.688988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.912 [2024-11-26 18:26:45.689032] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.912 [2024-11-26 18:26:45.702448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.912 [2024-11-26 18:26:45.702702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.912 [2024-11-26 18:26:45.702744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.912 [2024-11-26 18:26:45.716417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.912 [2024-11-26 18:26:45.716700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.912 [2024-11-26 18:26:45.716728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.912 [2024-11-26 18:26:45.730406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.912 [2024-11-26 18:26:45.730637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.912 [2024-11-26 18:26:45.730665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.912 [2024-11-26 18:26:45.744546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.912 [2024-11-26 18:26:45.744857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.912 [2024-11-26 18:26:45.744900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.912 [2024-11-26 18:26:45.758420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.912 [2024-11-26 18:26:45.758645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.912 [2024-11-26 18:26:45.758679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.771874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.772132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.772160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.785593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.785876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.785905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.799369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.799598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.799626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.812950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.813220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.813249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.826544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.826772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.826799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.840110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.840388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.840416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.853733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.853955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.853983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.867385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.867610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.867638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.881140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.881389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 
18:26:45.881417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.894859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.895113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.895141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:57.913 [2024-11-26 18:26:45.908485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:57.913 [2024-11-26 18:26:45.908711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:57.913 [2024-11-26 18:26:45.908739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:45.921993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:45.922219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.171 [2024-11-26 18:26:45.922247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:45.935574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:45.935800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.171 [2024-11-26 18:26:45.935829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:45.949149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:45.949382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.171 [2024-11-26 18:26:45.949410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:45.962920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:45.963171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.171 [2024-11-26 18:26:45.963199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:45.976405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:45.976636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:58.171 [2024-11-26 18:26:45.976663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:45.990042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:45.990328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.171 [2024-11-26 18:26:45.990356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:46.003614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:46.003893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.171 [2024-11-26 18:26:46.003921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:46.017218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:46.017454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.171 [2024-11-26 18:26:46.017483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:46.030441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:46.030735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.171 [2024-11-26 18:26:46.030763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:46.044234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:46.044484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.171 [2024-11-26 18:26:46.044514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.171 [2024-11-26 18:26:46.057664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.171 [2024-11-26 18:26:46.057894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.057921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.172 [2024-11-26 18:26:46.071355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.172 [2024-11-26 18:26:46.071610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13413 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.071639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.172 [2024-11-26 18:26:46.084869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.172 [2024-11-26 18:26:46.085152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.085179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.172 [2024-11-26 18:26:46.098553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.172 [2024-11-26 18:26:46.098832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.098860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.172 [2024-11-26 18:26:46.112279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.172 [2024-11-26 18:26:46.112511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.112547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.172 [2024-11-26 18:26:46.125915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.172 [2024-11-26 18:26:46.126142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.126184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.172 [2024-11-26 18:26:46.139474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.172 [2024-11-26 18:26:46.139728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.139755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.172 [2024-11-26 18:26:46.153036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.172 [2024-11-26 18:26:46.153261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.153289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.172 [2024-11-26 18:26:46.166652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.172 [2024-11-26 18:26:46.166917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:15129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.166944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.172 [2024-11-26 18:26:46.180188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.172 [2024-11-26 18:26:46.180426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.172 [2024-11-26 18:26:46.180454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.430 [2024-11-26 18:26:46.193625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.430 [2024-11-26 18:26:46.193850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.430 [2024-11-26 18:26:46.193877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.430 [2024-11-26 18:26:46.206905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.430 [2024-11-26 18:26:46.207130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.430 [2024-11-26 18:26:46.207158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.430 [2024-11-26 18:26:46.220373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.430 [2024-11-26 18:26:46.220608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.430 [2024-11-26 18:26:46.220636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.430 [2024-11-26 18:26:46.233839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.430 [2024-11-26 18:26:46.234127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.430 [2024-11-26 18:26:46.234157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.430 [2024-11-26 18:26:46.247330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.430 [2024-11-26 18:26:46.247529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.430 [2024-11-26 18:26:46.247558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.430 [2024-11-26 18:26:46.260612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.430 [2024-11-26 18:26:46.260851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:119 nsid:1 lba:6616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.430 [2024-11-26 18:26:46.260879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.430 [2024-11-26 18:26:46.273939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.274163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.274191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.286856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.287087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.287115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.300095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.300290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.300325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.313749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.313974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.314003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.327155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.327365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.327393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.340747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.341001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.341029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.354288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.354492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.354520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.367961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.368154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.368182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.381559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.381822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.381849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.395154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.395358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.395390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.408552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.408818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.408847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.422243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.422446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.422475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.431 [2024-11-26 18:26:46.435878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.431 [2024-11-26 18:26:46.436104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.431 [2024-11-26 18:26:46.436133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.449156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 
18:26:46.449367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.449396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.462745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.462973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.463014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.476385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.476639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.476668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 18538.00 IOPS, 72.41 MiB/s [2024-11-26T17:26:46.700Z] [2024-11-26 18:26:46.490007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.490356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.490385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.503529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.503784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.503812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.517119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.517349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.517376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.530513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.530736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.530764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.543821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.544017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.544045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.557579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.557774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.557802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.571080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.571311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.571340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.584547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.584805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.584833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.689 [2024-11-26 18:26:46.598327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.689 [2024-11-26 18:26:46.598521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.689 [2024-11-26 18:26:46.598550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.690 [2024-11-26 18:26:46.611864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.690 [2024-11-26 18:26:46.612165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.690 [2024-11-26 18:26:46.612194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.690 [2024-11-26 18:26:46.625403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.690 [2024-11-26 18:26:46.625695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.690 [2024-11-26 18:26:46.625725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.690 [2024-11-26 18:26:46.639030] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.690 [2024-11-26 18:26:46.639324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.690 [2024-11-26 18:26:46.639353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.690 [2024-11-26 18:26:46.652472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.690 [2024-11-26 18:26:46.652698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.690 [2024-11-26 18:26:46.652726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.690 [2024-11-26 18:26:46.665942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.690 [2024-11-26 18:26:46.666210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.690 [2024-11-26 18:26:46.666238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.690 [2024-11-26 18:26:46.679474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.690 [2024-11-26 18:26:46.679698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.690 [2024-11-26 18:26:46.679726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.690 [2024-11-26 18:26:46.692936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.690 [2024-11-26 18:26:46.693130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.690 [2024-11-26 18:26:46.693158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.706285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.706490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.706519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.719933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.720203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.720232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 
[2024-11-26 18:26:46.733553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.733776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.733804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.747025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.747310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.747339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.760537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.760814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.760842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.774197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.774401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.774430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.788231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.788532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.788560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.801968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.802164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.802191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.815981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.816235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.816283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.829723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.829989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.830034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.843580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.843819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.843845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.857513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.857774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.857817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.871562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.871851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.871894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.885484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.885803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.885846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.899394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.899631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.899657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.913388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.913627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.913654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.927383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.927584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.927626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.941458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.941699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.941743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:58.948 [2024-11-26 18:26:46.955388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:58.948 [2024-11-26 18:26:46.955615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:58.948 [2024-11-26 18:26:46.955642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:46.968983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:46.969190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:46.969217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:46.982887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:46.983206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:46.983233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:46.996790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:46.997034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:46.997077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.010676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.010993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.011036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.024553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.024788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.024814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.038597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.038823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.038851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.051957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.052193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.052219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.066068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.066367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.066396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.080028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.080240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.080267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.094132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.094377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.094405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.108000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.108237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.108263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.122069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.122330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.122358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.136051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.136286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.136335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.150111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.150353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.207 [2024-11-26 18:26:47.150396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.207 [2024-11-26 18:26:47.164097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.207 [2024-11-26 18:26:47.164349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.208 [2024-11-26 18:26:47.164377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.208 [2024-11-26 18:26:47.178022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.208 [2024-11-26 18:26:47.178227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.208 [2024-11-26 18:26:47.178260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.208 [2024-11-26 18:26:47.192019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.208 [2024-11-26 18:26:47.192258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.208 [2024-11-26 18:26:47.192284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.208 [2024-11-26 18:26:47.205984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.208 [2024-11-26 18:26:47.206285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.208 [2024-11-26 
18:26:47.206320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.219730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.219924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.219951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.233468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.233718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.233745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.247477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.247685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.247726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.261427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.261663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.261705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.275260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.275469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.275497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.289160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.289404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.289432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.302639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.302905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:59.466 [2024-11-26 18:26:47.302946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.316636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.316857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.316884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.330429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.330675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.330718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.344371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.344566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.344594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.358363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.358663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.358705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.372261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.372465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.372493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.386241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.386542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.386570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.400158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.400362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22932 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.400390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.413691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.413978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.414005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.427535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.427782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.466 [2024-11-26 18:26:47.427809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.466 [2024-11-26 18:26:47.441620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.466 [2024-11-26 18:26:47.441950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.467 [2024-11-26 18:26:47.441993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.467 [2024-11-26 18:26:47.455402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.467 [2024-11-26 18:26:47.455652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.467 [2024-11-26 18:26:47.455694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.467 [2024-11-26 18:26:47.469401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.467 [2024-11-26 18:26:47.469636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.467 [2024-11-26 18:26:47.469678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.724 [2024-11-26 18:26:47.483038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.724 [2024-11-26 18:26:47.483230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.724 [2024-11-26 18:26:47.483258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.724 18536.50 IOPS, 72.41 MiB/s [2024-11-26T17:26:47.735Z] [2024-11-26 18:26:47.496536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x861d50) with pdu=0x200016eff3c8 00:30:59.724 [2024-11-26 18:26:47.496776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:59.725 [2024-11-26 18:26:47.496804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:59.725 00:30:59.725 Latency(us) 00:30:59.725 [2024-11-26T17:26:47.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.725 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.725 nvme0n1 : 2.01 18542.08 72.43 0.00 0.00 6887.08 4927.34 14854.83 00:30:59.725 [2024-11-26T17:26:47.736Z] =================================================================================================================== 00:30:59.725 [2024-11-26T17:26:47.736Z] Total : 18542.08 72.43 0.00 0.00 6887.08 4927.34 14854.83 00:30:59.725 { 00:30:59.725 "results": [ 00:30:59.725 { 00:30:59.725 "job": "nvme0n1", 00:30:59.725 "core_mask": "0x2", 00:30:59.725 "workload": "randwrite", 00:30:59.725 "status": "finished", 00:30:59.725 "queue_depth": 128, 00:30:59.725 "io_size": 4096, 00:30:59.725 "runtime": 2.009321, 00:30:59.725 "iops": 18542.08461465341, 00:30:59.725 "mibps": 72.43001802598988, 00:30:59.725 "io_failed": 0, 00:30:59.725 "io_timeout": 0, 00:30:59.725 "avg_latency_us": 6887.08282852141, 00:30:59.725 "min_latency_us": 4927.3362962962965, 00:30:59.725 "max_latency_us": 14854.826666666666 00:30:59.725 } 00:30:59.725 ], 00:30:59.725 "core_count": 1 00:30:59.725 } 00:30:59.725 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:59.725 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:59.725 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:59.725 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:59.725 | .driver_specific 00:30:59.725 | .nvme_error 00:30:59.725 | .status_code 00:30:59.725 | .command_transient_transport_error' 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 716573 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 716573 ']' 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 716573 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 716573 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 716573' 00:30:59.984 
killing process with pid 716573 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 716573 00:30:59.984 Received shutdown signal, test time was about 2.000000 seconds 00:30:59.984 00:30:59.984 Latency(us) 00:30:59.984 [2024-11-26T17:26:47.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.984 [2024-11-26T17:26:47.995Z] =================================================================================================================== 00:30:59.984 [2024-11-26T17:26:47.995Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:59.984 18:26:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 716573 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=716977 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 716977 /var/tmp/bperf.sock 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 716977 ']' 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:00.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.242 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:00.242 [2024-11-26 18:26:48.067854] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:31:00.242 [2024-11-26 18:26:48.067936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716977 ] 00:31:00.242 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:00.242 Zero copy mechanism will not be used. 
00:31:00.242 [2024-11-26 18:26:48.135578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.242 [2024-11-26 18:26:48.193877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.500 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.500 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:31:00.500 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:00.500 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:31:00.758 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:31:00.758 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.758 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:00.758 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.758 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:00.758 18:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:31:01.325 nvme0n1 00:31:01.325 18:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:31:01.325 18:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.325 18:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:01.325 18:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.325 18:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:31:01.325 18:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:01.325 I/O size of 131072 is greater than zero copy threshold (65536). 00:31:01.325 Zero copy mechanism will not be used. 00:31:01.325 Running I/O for 2 seconds... 
00:31:01.325 [2024-11-26 18:26:49.177151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.177262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.177301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.183117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.183206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.183260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.188981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.189093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.189139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.194947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.195127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.195155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.201361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.201491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.201519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.207794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.207948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.207976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.213588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.213691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.213733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.219939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.220161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.220190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.226480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.226705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.226734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.232926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.233059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.233088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.239512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.239715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.239744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.245626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.245737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.245765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.252935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.253095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.253124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.259741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.259950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.259978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.267453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.267615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.267643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.273429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.273594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.273622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.279957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.280058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.280086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.286000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.286077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.286104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.291232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.291326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.291354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.296698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.296797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.296825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.301989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.302091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.302119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.307199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.307297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.307336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.312920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.312993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.313035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.318389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.318462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.318490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.323672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.323773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.323799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.328825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.328909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.328937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.325 [2024-11-26 18:26:49.334251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.325 [2024-11-26 18:26:49.334336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.325 [2024-11-26 18:26:49.334365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.340032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.340105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.340138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.346041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.346150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.346179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.351969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.352069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.352095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.357920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.358008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.358035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.363408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.363663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.363692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.368328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.368649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.368678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.373298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.373635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.373665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.378146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.378460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.378490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.383160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.383478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.383508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.388245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.388571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.388600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.393294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.393635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.393664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.398412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.398730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.398759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.403483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.403769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.403798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.408368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.408688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.408717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.413271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.413611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 
18:26:49.413640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.583 [2024-11-26 18:26:49.418390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.583 [2024-11-26 18:26:49.418710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.583 [2024-11-26 18:26:49.418739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.423380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.423658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.423687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.428373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.428618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.428648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.433037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.433316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.433346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.437915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.438223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.438253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.443734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.444048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.444078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.448929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.449209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:01.584 [2024-11-26 18:26:49.449238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.454395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.454687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.454716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.460256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.460602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.460631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.466338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.466636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.466665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.473142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.473457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.473488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.480007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.480319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.480354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.486046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.486379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.486409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.491271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.491581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.491610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.496661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.496967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.496997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.502318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.502618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.502647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.507182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.507483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.507512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.512058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.512356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.512386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.517128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.517411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.517441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.522052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.522335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.522364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.526930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.527221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.527250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.531788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.532177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.532221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.536863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.537149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.537178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.541685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.542038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.542067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.546763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.547055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.547084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.551557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.551860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.551889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.556548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.556852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.556881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.561352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.561582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.561610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.565832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.566054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.566082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.570282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.570517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.570545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.575174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.575397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.575426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.579840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.580067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.580095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.584264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.584507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.584535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.584 [2024-11-26 18:26:49.588673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.584 [2024-11-26 18:26:49.588883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.584 [2024-11-26 18:26:49.588911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.593037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.593275] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.593311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.597451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.597711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.597739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.601901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.602141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.602169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.606208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.606445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.606479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.610606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.610842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.610870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.614951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.615170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.615198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.619228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.619489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.619517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.623567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.623816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.623843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.627906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.628126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.628154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.632193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.632424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.632453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.636455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.636755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.636784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.640875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.641109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.641137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.843 [2024-11-26 18:26:49.645158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.843 [2024-11-26 18:26:49.645420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.843 [2024-11-26 18:26:49.645449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.649485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.649741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.649770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.653897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 
18:26:49.654117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.654145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.658238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.658473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.658501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.662637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.662856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.662884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.667020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.667232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.667261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.671423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.671646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.671674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.675839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.676062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.676090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.680196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.680442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.680471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.685541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 
00:31:01.844 [2024-11-26 18:26:49.685781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.685811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.690751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.690955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.690983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.695635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.695875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.695904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.700393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.700569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.700598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.705324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.705551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.705580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.710296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.710532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.710561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.715180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.715423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.715452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.720101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) 
with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.720350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.720378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.725289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.725545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.725579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.730320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.730543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.730571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.734859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.735078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.735106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.740146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.740474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.740502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.745178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.745438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.745467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.749815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:01.844 [2024-11-26 18:26:49.750031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.844 [2024-11-26 18:26:49.750060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:01.844 [2024-11-26 18:26:49.754324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x862090) with pdu=0x200016eff3c8
00:31:01.844 [2024-11-26 18:26:49.754579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.844 [2024-11-26 18:26:49.754607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:01.844 [2024-11-26 18:26:49.759366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8
00:31:01.844 [2024-11-26 18:26:49.759583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:01.844 [2024-11-26 18:26:49.759611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern (tcp.c:2233:data_crc32_calc_done "Data digest error", nvme_qpair.c WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each affected WRITE through 18:26:50.173971, with only the timestamps, cid, lba, and sqhd fields changing ...]
00:31:02.367 5931.00 IOPS, 741.38 MiB/s [2024-11-26T17:26:50.378Z]
[... the same repeating data digest error sequence continues through 18:26:50.464111 ...]
00:31:02.629 [2024-11-26 18:26:50.468660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8
00:31:02.629 [2024-11-26 18:26:50.468867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:02.629 [2024-11-26 18:26:50.468896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0
dnr:0 00:31:02.629 [2024-11-26 18:26:50.473557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.473771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.473800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.478991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.479225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.479253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.484839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.485075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.485105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.489696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.489897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.489925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.494247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.494454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.494483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.498713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.498887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.498916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.503245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.503441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.503470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.507759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.507995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.508022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.512469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.512650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.512679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.516951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.517145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.517174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.521573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.521764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.521793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.629 [2024-11-26 18:26:50.526270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.629 [2024-11-26 18:26:50.526467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.629 [2024-11-26 18:26:50.526496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.530840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.531043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.531072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.535484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.535654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.535683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.540067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.540294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.540330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.544665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.544838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.544866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.549202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.549405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.549434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.553790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.553975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.554004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.558366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.558548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.558577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.563104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.563290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.563328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.567629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.567807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.567841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.572323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.572497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.572526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.576912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.577087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.577115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.581462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.581662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.581691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.586048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.586208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.586237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.590607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.590754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.590783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.595427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.595579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.595608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.600088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.600255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.600283] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.604683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.604856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.604884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.609212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.609389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.609418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.613824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.613992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.614020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.618454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.618661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.618689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.622928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.623092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.623121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.627654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.627852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.627880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.630 [2024-11-26 18:26:50.632205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.630 [2024-11-26 18:26:50.632393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.630 [2024-11-26 18:26:50.632421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.636715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.636861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.636890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.641284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.641459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.641487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.645840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.646069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.646098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.650345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.650521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.650549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.654893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.655094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.655122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.659440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.659607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.659635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.663908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.664066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 
18:26:50.664095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.668481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.668678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.668706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.672925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.673083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.673112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.677683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.677854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.677882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.682017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.682178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.682207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.686972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.687167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.687201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.692467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.692625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.692654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.697842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.698088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:02.890 [2024-11-26 18:26:50.698116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.703297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.890 [2024-11-26 18:26:50.703566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.890 [2024-11-26 18:26:50.703594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.890 [2024-11-26 18:26:50.709030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.709200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.709229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.713479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.713627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.713657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.717864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.718036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.718064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.722353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.722522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.722550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.727376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.727552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.727581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.732553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.732715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.732749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.736934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.737099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.737129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.741250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.741429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.741458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.745583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.745730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.745759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.749953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.750117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.750145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.754403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.754562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.754590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.758773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.758972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.759001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.763187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.763359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.763388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.767666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.767837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.767865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.772057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.772235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.772263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.776449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.776599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.776627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.780856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.781023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.781052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.785219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.785388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.785416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.789622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.789778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.789807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.794016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.794186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.794214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.798475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.798620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.798647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.803036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.803202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.803231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.807447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.807602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.807630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.811866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.812033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.812061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.816434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.816582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.816611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.820804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.820969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.820998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.825158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.891 [2024-11-26 18:26:50.825353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.891 [2024-11-26 18:26:50.825381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.891 [2024-11-26 18:26:50.829547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.829702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.829731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.833919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.834082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.834110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.838311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.838485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.838513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.842726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.842896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.842923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.847138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.847324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.847357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.851548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.851707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.851735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.855948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.856114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.856142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.860380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.860532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.860561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.864720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.864886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.864915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.869168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.869352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.869381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.873549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.873696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.873725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.877922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.878087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.878116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.882260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.882440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.882468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.886644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 
18:26:50.886814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.886842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.891056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.891220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.891248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:02.892 [2024-11-26 18:26:50.895459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:02.892 [2024-11-26 18:26:50.895609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:02.892 [2024-11-26 18:26:50.895637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.150 [2024-11-26 18:26:50.899832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.150 [2024-11-26 18:26:50.899997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.150 [2024-11-26 18:26:50.900025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.150 [2024-11-26 18:26:50.904235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.150 [2024-11-26 18:26:50.904390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.150 [2024-11-26 18:26:50.904418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.150 [2024-11-26 18:26:50.908551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.150 [2024-11-26 18:26:50.908699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.150 [2024-11-26 18:26:50.908727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.150 [2024-11-26 18:26:50.912913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.150 [2024-11-26 18:26:50.913079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.150 [2024-11-26 18:26:50.913108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.150 [2024-11-26 18:26:50.917281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 
00:31:03.151 [2024-11-26 18:26:50.917441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.917469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.921638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.921787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.921815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.926055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.926206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.926235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.930530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.930679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.930708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.934909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.935076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.935104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.939279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.939458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.939487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.944369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.944530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.944558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.949561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) 
with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.949710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.949739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.954822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.954991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.955019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.960255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.960398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.960428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.965130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.965285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.965329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.969808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.969966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.969994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.974210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.974369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.974397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.978690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.978839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.978868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.983379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.983516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.983544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.988003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.988175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.988203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.993247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.993427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.993456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:50.998114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:50.998271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:50.998300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.002907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:51.003066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:51.003095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.007749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:51.007979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:51.008007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.013009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:51.013172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:51.013200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.017456] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:51.017605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:51.017634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.021750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:51.021900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:51.021929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.026054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:51.026216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:51.026243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.030460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:51.030612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:51.030642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.034786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:51.034948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:51.034976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.039124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.151 [2024-11-26 18:26:51.039290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.151 [2024-11-26 18:26:51.039329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.151 [2024-11-26 18:26:51.043429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.043580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.043609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.047734] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.047877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.047906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.052074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.052227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.052255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.056386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.056537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.056566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.060769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.060933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.060961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.065051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.065213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.065242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.069422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.069569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.069597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.073715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.073886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.073914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.152 
[2024-11-26 18:26:51.078053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.078215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.078243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.082366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.082524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.082557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.086723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.086885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.086913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.091033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.091196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.091224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.095381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.095542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.095571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.099674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.099839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.099867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.104024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.104190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.104218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:31:03.152 [2024-11-26 18:26:51.108328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.108481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.108509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.112612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.112766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.112794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.116969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.117137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.117165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.121265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.121454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.121482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.125760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.125922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.125950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.130095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.130269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.130297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.134436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.134592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.134620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.138820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.138972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.139000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.143094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.143260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.143288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.147408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.147562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.147590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.151720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.151860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.151888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.152 [2024-11-26 18:26:51.156014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.152 [2024-11-26 18:26:51.156176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.152 [2024-11-26 18:26:51.156205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.410 [2024-11-26 18:26:51.160385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.410 [2024-11-26 18:26:51.160534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.410 [2024-11-26 18:26:51.160562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.410 [2024-11-26 18:26:51.164731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.410 [2024-11-26 18:26:51.164885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.410 [2024-11-26 18:26:51.164914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:03.410 [2024-11-26 18:26:51.169080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.410 [2024-11-26 18:26:51.169241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.410 [2024-11-26 18:26:51.169270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:03.410 [2024-11-26 18:26:51.173437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.410 [2024-11-26 18:26:51.173587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.410 [2024-11-26 18:26:51.173616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:03.410 [2024-11-26 18:26:51.177789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x862090) with pdu=0x200016eff3c8 00:31:03.410 [2024-11-26 18:26:51.179548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:03.410 [2024-11-26 18:26:51.179578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:03.410 6277.50 IOPS, 784.69 MiB/s 00:31:03.410 Latency(us) 00:31:03.410 [2024-11-26T17:26:51.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.410 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:31:03.410 nvme0n1 : 2.00 6275.03 784.38 0.00 0.00 2542.94 1735.49 7475.96 00:31:03.410 [2024-11-26T17:26:51.421Z] =================================================================================================================== 00:31:03.410 [2024-11-26T17:26:51.421Z] Total : 6275.03 784.38 0.00 0.00 2542.94 1735.49 7475.96 00:31:03.410 { 00:31:03.410 "results": [ 00:31:03.410 { 00:31:03.410 "job": "nvme0n1", 00:31:03.410 "core_mask": "0x2", 00:31:03.410 "workload": "randwrite", 00:31:03.410 "status": "finished", 00:31:03.410 "queue_depth": 16, 00:31:03.410 "io_size": 131072, 00:31:03.410 "runtime": 2.004135, 00:31:03.411 "iops": 6275.026382953244, 00:31:03.411 "mibps": 784.3782978691555, 00:31:03.411 "io_failed": 0, 00:31:03.411 "io_timeout": 0, 00:31:03.411 "avg_latency_us": 2542.9449137687307, 00:31:03.411 "min_latency_us": 1735.4903703703703, 00:31:03.411 "max_latency_us": 7475.958518518519 00:31:03.411 } 00:31:03.411 ], 00:31:03.411 "core_count": 1 00:31:03.411 } 00:31:03.411 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:31:03.411 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:31:03.411 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:31:03.411 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:31:03.411 | .driver_specific 00:31:03.411 | .nvme_error 00:31:03.411 | .status_code 00:31:03.411 | .command_transient_transport_error' 
00:31:03.668 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 406 > 0 )) 00:31:03.668 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 716977 00:31:03.668 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 716977 ']' 00:31:03.668 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 716977 00:31:03.668 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:03.669 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.669 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 716977 00:31:03.669 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:03.669 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:03.669 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 716977' 00:31:03.669 killing process with pid 716977 00:31:03.669 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 716977 00:31:03.669 Received shutdown signal, test time was about 2.000000 seconds 00:31:03.669 00:31:03.669 Latency(us) 00:31:03.669 [2024-11-26T17:26:51.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.669 [2024-11-26T17:26:51.680Z] =================================================================================================================== 00:31:03.669 [2024-11-26T17:26:51.680Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:03.669 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 716977 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 715608 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 715608 ']' 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 715608 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 715608 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 715608' 00:31:03.928 killing process with pid 715608 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 715608 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 715608 00:31:03.928 00:31:03.928 real 0m15.474s 00:31:03.928 user 0m31.024s 
00:31:03.928 sys 0m4.345s 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.928 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:31:03.928 ************************************ 00:31:03.928 END TEST nvmf_digest_error 00:31:03.928 ************************************ 00:31:04.188 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:31:04.188 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:31:04.188 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:04.188 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:31:04.188 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:04.188 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:31:04.188 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:04.188 18:26:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:04.188 rmmod nvme_tcp 00:31:04.188 rmmod nvme_fabrics 00:31:04.188 rmmod nvme_keyring 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 715608 ']' 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 715608 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 715608 ']' 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 715608 00:31:04.188 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (715608) - No such process 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 715608 is not found' 00:31:04.188 Process with pid 715608 is not found 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.188 18:26:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.093 18:26:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.093 00:31:06.093 real 0m35.874s 00:31:06.093 user 1m3.425s 00:31:06.093 sys 0m10.375s 00:31:06.093 18:26:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.093 18:26:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:31:06.093 ************************************ 00:31:06.093 END TEST nvmf_digest 00:31:06.093 ************************************ 00:31:06.093 18:26:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:31:06.093 18:26:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:31:06.093 18:26:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:31:06.093 18:26:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:06.093 18:26:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:06.093 18:26:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.093 18:26:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.352 ************************************ 00:31:06.352 START TEST nvmf_bdevperf 00:31:06.352 ************************************ 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:31:06.352 * Looking for test storage... 00:31:06.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.352 18:26:54 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.352 --rc genhtml_branch_coverage=1 00:31:06.352 --rc genhtml_function_coverage=1 00:31:06.352 --rc genhtml_legend=1 00:31:06.352 --rc geninfo_all_blocks=1 00:31:06.352 --rc geninfo_unexecuted_blocks=1 00:31:06.352 00:31:06.352 ' 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.352 --rc genhtml_branch_coverage=1 00:31:06.352 --rc genhtml_function_coverage=1 00:31:06.352 --rc genhtml_legend=1 00:31:06.352 --rc geninfo_all_blocks=1 00:31:06.352 --rc geninfo_unexecuted_blocks=1 00:31:06.352 00:31:06.352 ' 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.352 --rc genhtml_branch_coverage=1 00:31:06.352 --rc genhtml_function_coverage=1 00:31:06.352 --rc genhtml_legend=1 00:31:06.352 --rc geninfo_all_blocks=1 00:31:06.352 --rc geninfo_unexecuted_blocks=1 00:31:06.352 00:31:06.352 ' 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:06.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.352 --rc genhtml_branch_coverage=1 00:31:06.352 --rc genhtml_function_coverage=1 00:31:06.352 --rc genhtml_legend=1 00:31:06.352 --rc geninfo_all_blocks=1 00:31:06.352 --rc geninfo_unexecuted_blocks=1 00:31:06.352 00:31:06.352 ' 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:31:06.352 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:06.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:06.353 18:26:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:08.885 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:08.885 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.885 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:08.886 Found net devices under 0000:09:00.0: cvl_0_0 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:08.886 Found net devices under 0000:09:00.1: cvl_0_1 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:08.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:31:08.886 00:31:08.886 --- 10.0.0.2 ping statistics --- 00:31:08.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.886 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:08.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:31:08.886 00:31:08.886 --- 10.0.0.1 ping statistics --- 00:31:08.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.886 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=719455 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 719455 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 719455 ']' 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.886 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:08.886 [2024-11-26 18:26:56.724495] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:31:08.886 [2024-11-26 18:26:56.724579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.886 [2024-11-26 18:26:56.798174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:08.886 [2024-11-26 18:26:56.856003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.886 [2024-11-26 18:26:56.856052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.886 [2024-11-26 18:26:56.856079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.886 [2024-11-26 18:26:56.856089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.886 [2024-11-26 18:26:56.856099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.886 [2024-11-26 18:26:56.857585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.886 [2024-11-26 18:26:56.857639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.886 [2024-11-26 18:26:56.857635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:09.145 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.145 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:09.145 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:09.145 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:09.145 18:26:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.145 [2024-11-26 18:26:57.006475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.145 Malloc0 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
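At this point the trace has brought the target side up: the two cvl ports enumerated above were split across a network namespace (cvl_0_0 with 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 with 10.0.0.1 left in the root namespace), nvmf_tgt was started inside that namespace, and the first RPCs created the TCP transport, the Malloc0 bdev and subsystem cnode1; the namespace-attach and listener RPCs follow immediately below. As a rough, hand-runnable sketch of what the rpc_cmd wrapper is doing here (it is assumed to resolve to scripts/rpc.py against the default /var/tmp/spdk.sock shown in waitforlisten, with paths relative to the SPDK tree):

    # start the target inside the namespace prepared above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    # same RPC sequence as the rpc_cmd calls traced in this log
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport (flags as passed via NVMF_TRANSPORT_OPTS)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420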
00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:09.145 [2024-11-26 18:26:57.068942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:09.145 { 00:31:09.145 "params": { 00:31:09.145 "name": "Nvme$subsystem", 00:31:09.145 "trtype": "$TEST_TRANSPORT", 00:31:09.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:09.145 "adrfam": "ipv4", 00:31:09.145 "trsvcid": "$NVMF_PORT", 00:31:09.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:09.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:09.145 "hdgst": ${hdgst:-false}, 00:31:09.145 "ddgst": ${ddgst:-false} 00:31:09.145 }, 00:31:09.145 "method": "bdev_nvme_attach_controller" 00:31:09.145 } 00:31:09.145 EOF 00:31:09.145 )") 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:09.145 18:26:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:09.145 "params": { 00:31:09.145 "name": "Nvme1", 00:31:09.145 "trtype": "tcp", 00:31:09.145 "traddr": "10.0.0.2", 00:31:09.145 "adrfam": "ipv4", 00:31:09.145 "trsvcid": "4420", 00:31:09.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:09.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:09.145 "hdgst": false, 00:31:09.145 "ddgst": false 00:31:09.145 }, 00:31:09.145 "method": "bdev_nvme_attach_controller" 00:31:09.145 }' 00:31:09.145 [2024-11-26 18:26:57.117486] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:31:09.145 [2024-11-26 18:26:57.117568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719486 ] 00:31:09.403 [2024-11-26 18:26:57.185724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.403 [2024-11-26 18:26:57.245405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.659 Running I/O for 1 seconds... 00:31:10.592 8471.00 IOPS, 33.09 MiB/s 00:31:10.592 Latency(us) 00:31:10.592 [2024-11-26T17:26:58.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:10.592 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:10.592 Verification LBA range: start 0x0 length 0x4000 00:31:10.592 Nvme1n1 : 1.01 8501.68 33.21 0.00 0.00 14996.60 3155.44 12718.84 00:31:10.592 [2024-11-26T17:26:58.604Z] =================================================================================================================== 00:31:10.593 [2024-11-26T17:26:58.604Z] Total : 8501.68 33.21 0.00 0.00 14996.60 3155.44 12718.84 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=719671 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:10.851 { 00:31:10.851 "params": { 00:31:10.851 "name": "Nvme$subsystem", 00:31:10.851 "trtype": "$TEST_TRANSPORT", 00:31:10.851 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.851 "adrfam": "ipv4", 00:31:10.851 "trsvcid": "$NVMF_PORT", 00:31:10.851 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.851 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.851 "hdgst": ${hdgst:-false}, 00:31:10.851 "ddgst": ${ddgst:-false} 00:31:10.851 }, 00:31:10.851 "method": "bdev_nvme_attach_controller" 00:31:10.851 } 00:31:10.851 EOF 00:31:10.851 )") 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:31:10.851 18:26:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:10.851 "params": { 00:31:10.851 "name": "Nvme1", 00:31:10.851 "trtype": "tcp", 00:31:10.851 "traddr": "10.0.0.2", 00:31:10.851 "adrfam": "ipv4", 00:31:10.851 "trsvcid": "4420", 00:31:10.851 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:10.851 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:10.851 "hdgst": false, 00:31:10.851 "ddgst": false 00:31:10.851 }, 00:31:10.851 "method": "bdev_nvme_attach_controller" 00:31:10.851 }' 00:31:10.851 [2024-11-26 18:26:58.843524] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:31:10.851 [2024-11-26 18:26:58.843633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid719671 ] 00:31:11.109 [2024-11-26 18:26:58.914380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.109 [2024-11-26 18:26:58.973714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.367 Running I/O for 15 seconds... 00:31:13.233 7939.00 IOPS, 31.01 MiB/s [2024-11-26T17:27:01.812Z] 8068.00 IOPS, 31.52 MiB/s [2024-11-26T17:27:01.812Z] 18:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 719455 00:31:13.801 18:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:31:13.801 [2024-11-26 18:27:01.806312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.801 [2024-11-26 18:27:01.806360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.801 [2024-11-26 18:27:01.806390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.801 [2024-11-26 18:27:01.806409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.801 [2024-11-26 18:27:01.806428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.801 [2024-11-26 18:27:01.806446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.801 [2024-11-26 18:27:01.806463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806545] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.806859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.806904] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.806937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.806969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.806986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:13.802 [2024-11-26 18:27:01.807452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
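A note on the flood of records above and below: each pair is bdev_nvme printing an in-flight command (nvme_io_qpair_print_command) and the locally generated completion for it (spdk_nvme_print_completion) after the target process was killed; ABORTED - SQ DELETION (00/08) is NVMe generic status 0x08, i.e. the command was aborted because its submission queue went away. With -q 128 on the bdevperf command line there can be up to 128 such aborts per qpair to drain, which is why the same two messages repeat with different cid/lba values. A quick way to size the blast radius from a saved copy of this console output (the filename is hypothetical):

    grep -c 'ABORTED - SQ DELETION' console.log              # how many queued I/Os were failed back
    grep -oE 'lba:[0-9]+' console.log | sort -u | wc -l      # how many distinct LBAs had I/O in flight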
00:31:13.802 [2024-11-26 18:27:01.807627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.802 [2024-11-26 18:27:01.807832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.802 [2024-11-26 18:27:01.807846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.807860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.807874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.807888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.807918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.807931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.807945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.807974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.807992] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36632 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.808960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.808988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.809002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.809015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.809050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.803 [2024-11-26 18:27:01.809064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.803 [2024-11-26 18:27:01.809078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:13.804 [2024-11-26 18:27:01.809223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809545] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.804 [2024-11-26 18:27:01.809676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.804 [2024-11-26 18:27:01.809691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.809718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.809748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.809775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.809818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.809845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.809887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.809915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.809943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.809977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.809992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.810006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.810020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.810034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:14.064 [2024-11-26 18:27:01.810062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.810078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.810093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.810108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.810121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.810137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.810151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.810167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.810181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.810195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.810223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.810239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:36952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.064 [2024-11-26 18:27:01.810251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.064 [2024-11-26 18:27:01.810265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.065 [2024-11-26 18:27:01.810278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.065 [2024-11-26 18:27:01.810334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.065 [2024-11-26 18:27:01.810367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.065 [2024-11-26 18:27:01.810401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.065 [2024-11-26 18:27:01.810431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.065 [2024-11-26 18:27:01.810461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.065 [2024-11-26 18:27:01.810497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.065 [2024-11-26 18:27:01.810526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:37024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:14.065 [2024-11-26 18:27:01.810555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e2880 is same with the state(6) to be set 00:31:14.065 [2024-11-26 18:27:01.810601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:14.065 [2024-11-26 18:27:01.810612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:14.065 [2024-11-26 18:27:01.810623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37032 len:8 PRP1 0x0 PRP2 0x0 00:31:14.065 [2024-11-26 18:27:01.810635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.065 [2024-11-26 18:27:01.810822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.065 [2024-11-26 18:27:01.810849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.065 [2024-11-26 18:27:01.810890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.065 [2024-11-26 18:27:01.810916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.065 [2024-11-26 18:27:01.810928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.065 [2024-11-26 18:27:01.814070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.065 [2024-11-26 18:27:01.814107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.065 [2024-11-26 18:27:01.814995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-11-26 18:27:01.815025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-11-26 18:27:01.815042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.065 [2024-11-26 18:27:01.815272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.065 [2024-11-26 18:27:01.815522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.065 [2024-11-26 18:27:01.815542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:31:14.065 [2024-11-26 18:27:01.815557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.065 [2024-11-26 18:27:01.815572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.065 [2024-11-26 18:27:01.827754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.065 [2024-11-26 18:27:01.828169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-11-26 18:27:01.828199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-11-26 18:27:01.828215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.065 [2024-11-26 18:27:01.828469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.065 [2024-11-26 18:27:01.828706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.065 [2024-11-26 18:27:01.828726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.065 [2024-11-26 18:27:01.828739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.065 [2024-11-26 18:27:01.828750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.065 [2024-11-26 18:27:01.841081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.065 [2024-11-26 18:27:01.841421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-11-26 18:27:01.841450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-11-26 18:27:01.841467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.065 [2024-11-26 18:27:01.841696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.065 [2024-11-26 18:27:01.841887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.065 [2024-11-26 18:27:01.841907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.065 [2024-11-26 18:27:01.841920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.065 [2024-11-26 18:27:01.841931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
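The cycle above repeats for the rest of this block: posix_sock_create() reports connect() failing with errno = 111, which on Linux is ECONNREFUSED, i.e. nothing is accepting connections on 10.0.0.2 port 4420 (the default NVMe/TCP port) at that point in the test, so each controller re-initialization attempt fails and bdev_nvme schedules another reset. As a minimal standalone sketch (not SPDK code; the address and port are simply copied from the log), the same errno can be observed with a plain blocking connect():

/* Minimal illustrative sketch, not part of SPDK: try a TCP connect to the
 * target address/port seen in the log and report errno. While no listener
 * is up on 10.0.0.2:4420, connect() fails with errno 111 (ECONNREFUSED),
 * matching the posix_sock_create errors printed above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* default NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expected while the target listener is down: errno == 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}

Built with a standard C toolchain, this prints "connect() failed, errno = 111 (Connection refused)" while the listener is absent and connects once a target is again accepting on 10.0.0.2:4420, which is exactly the condition each retry in the log is waiting for.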
00:31:14.065 [2024-11-26 18:27:01.854300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.065 [2024-11-26 18:27:01.854644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-11-26 18:27:01.854677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-11-26 18:27:01.854693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.065 [2024-11-26 18:27:01.854924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.065 [2024-11-26 18:27:01.855115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.065 [2024-11-26 18:27:01.855135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.065 [2024-11-26 18:27:01.855147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.065 [2024-11-26 18:27:01.855159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.065 [2024-11-26 18:27:01.867559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.065 [2024-11-26 18:27:01.867995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.065 [2024-11-26 18:27:01.868040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.065 [2024-11-26 18:27:01.868055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.065 [2024-11-26 18:27:01.868289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.065 [2024-11-26 18:27:01.868515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.066 [2024-11-26 18:27:01.868535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.066 [2024-11-26 18:27:01.868549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.066 [2024-11-26 18:27:01.868561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.066 [2024-11-26 18:27:01.880713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.066 [2024-11-26 18:27:01.881087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-11-26 18:27:01.881126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-11-26 18:27:01.881159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.066 [2024-11-26 18:27:01.881407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.066 [2024-11-26 18:27:01.881623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.066 [2024-11-26 18:27:01.881643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.066 [2024-11-26 18:27:01.881671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.066 [2024-11-26 18:27:01.881683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.066 [2024-11-26 18:27:01.893826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.066 [2024-11-26 18:27:01.894119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-11-26 18:27:01.894173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-11-26 18:27:01.894207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.066 [2024-11-26 18:27:01.894471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.066 [2024-11-26 18:27:01.894697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.066 [2024-11-26 18:27:01.894719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.066 [2024-11-26 18:27:01.894732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.066 [2024-11-26 18:27:01.894744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.066 [2024-11-26 18:27:01.907046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.066 [2024-11-26 18:27:01.907399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-11-26 18:27:01.907430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-11-26 18:27:01.907446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.066 [2024-11-26 18:27:01.907683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.066 [2024-11-26 18:27:01.907894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.066 [2024-11-26 18:27:01.907914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.066 [2024-11-26 18:27:01.907927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.066 [2024-11-26 18:27:01.907939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.066 [2024-11-26 18:27:01.920181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.066 [2024-11-26 18:27:01.920534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-11-26 18:27:01.920563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-11-26 18:27:01.920579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.066 [2024-11-26 18:27:01.920795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.066 [2024-11-26 18:27:01.920998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.066 [2024-11-26 18:27:01.921017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.066 [2024-11-26 18:27:01.921030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.066 [2024-11-26 18:27:01.921042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.066 [2024-11-26 18:27:01.933349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.066 [2024-11-26 18:27:01.933753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-11-26 18:27:01.933782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-11-26 18:27:01.933797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.066 [2024-11-26 18:27:01.934048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.066 [2024-11-26 18:27:01.934271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.066 [2024-11-26 18:27:01.934317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.066 [2024-11-26 18:27:01.934338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.066 [2024-11-26 18:27:01.934352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.066 [2024-11-26 18:27:01.946494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.066 [2024-11-26 18:27:01.946919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-11-26 18:27:01.946949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-11-26 18:27:01.946965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.066 [2024-11-26 18:27:01.947207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.066 [2024-11-26 18:27:01.947465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.066 [2024-11-26 18:27:01.947488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.066 [2024-11-26 18:27:01.947502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.066 [2024-11-26 18:27:01.947515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.066 [2024-11-26 18:27:01.959578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.066 [2024-11-26 18:27:01.959939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-11-26 18:27:01.959968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-11-26 18:27:01.959984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.066 [2024-11-26 18:27:01.960223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.066 [2024-11-26 18:27:01.960480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.066 [2024-11-26 18:27:01.960503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.066 [2024-11-26 18:27:01.960517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.066 [2024-11-26 18:27:01.960530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.066 [2024-11-26 18:27:01.972677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.066 [2024-11-26 18:27:01.972968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-11-26 18:27:01.972994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.066 [2024-11-26 18:27:01.973009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.066 [2024-11-26 18:27:01.973206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.066 [2024-11-26 18:27:01.973439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.066 [2024-11-26 18:27:01.973460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.066 [2024-11-26 18:27:01.973474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.066 [2024-11-26 18:27:01.973486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.066 [2024-11-26 18:27:01.985693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.066 [2024-11-26 18:27:01.986036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.066 [2024-11-26 18:27:01.986065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.067 [2024-11-26 18:27:01.986081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.067 [2024-11-26 18:27:01.986331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.067 [2024-11-26 18:27:01.986547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.067 [2024-11-26 18:27:01.986568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.067 [2024-11-26 18:27:01.986582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.067 [2024-11-26 18:27:01.986608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.067 [2024-11-26 18:27:01.998913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.067 [2024-11-26 18:27:01.999284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-11-26 18:27:01.999318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.067 [2024-11-26 18:27:01.999349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.067 [2024-11-26 18:27:01.999575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.067 [2024-11-26 18:27:01.999801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.067 [2024-11-26 18:27:01.999822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.067 [2024-11-26 18:27:01.999834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.067 [2024-11-26 18:27:01.999846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.067 [2024-11-26 18:27:02.012043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.067 [2024-11-26 18:27:02.012360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-11-26 18:27:02.012389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.067 [2024-11-26 18:27:02.012405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.067 [2024-11-26 18:27:02.012623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.067 [2024-11-26 18:27:02.012829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.067 [2024-11-26 18:27:02.012850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.067 [2024-11-26 18:27:02.012863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.067 [2024-11-26 18:27:02.012874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.067 [2024-11-26 18:27:02.025079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.067 [2024-11-26 18:27:02.025453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-11-26 18:27:02.025486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.067 [2024-11-26 18:27:02.025503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.067 [2024-11-26 18:27:02.025722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.067 [2024-11-26 18:27:02.025927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.067 [2024-11-26 18:27:02.025947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.067 [2024-11-26 18:27:02.025960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.067 [2024-11-26 18:27:02.025972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.067 [2024-11-26 18:27:02.038509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.067 [2024-11-26 18:27:02.038856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-11-26 18:27:02.038885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.067 [2024-11-26 18:27:02.038902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.067 [2024-11-26 18:27:02.039140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.067 [2024-11-26 18:27:02.039361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.067 [2024-11-26 18:27:02.039396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.067 [2024-11-26 18:27:02.039411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.067 [2024-11-26 18:27:02.039423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.067 [2024-11-26 18:27:02.051652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.067 [2024-11-26 18:27:02.051997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-11-26 18:27:02.052026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.067 [2024-11-26 18:27:02.052042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.067 [2024-11-26 18:27:02.052278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.067 [2024-11-26 18:27:02.052519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.067 [2024-11-26 18:27:02.052541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.067 [2024-11-26 18:27:02.052555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.067 [2024-11-26 18:27:02.052567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.067 [2024-11-26 18:27:02.064851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.067 [2024-11-26 18:27:02.065219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.067 [2024-11-26 18:27:02.065257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.067 [2024-11-26 18:27:02.065282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.067 [2024-11-26 18:27:02.065597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.067 [2024-11-26 18:27:02.065878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.067 [2024-11-26 18:27:02.065906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.067 [2024-11-26 18:27:02.065926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.067 [2024-11-26 18:27:02.065947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.327 [2024-11-26 18:27:02.079314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.327 [2024-11-26 18:27:02.079805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-11-26 18:27:02.079847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.327 [2024-11-26 18:27:02.079876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.327 [2024-11-26 18:27:02.080127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.327 [2024-11-26 18:27:02.080362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.327 [2024-11-26 18:27:02.080387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.327 [2024-11-26 18:27:02.080402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.327 [2024-11-26 18:27:02.080416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.327 [2024-11-26 18:27:02.093731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.327 [2024-11-26 18:27:02.094132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-11-26 18:27:02.094167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.327 [2024-11-26 18:27:02.094185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.327 [2024-11-26 18:27:02.094465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.327 [2024-11-26 18:27:02.094709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.327 [2024-11-26 18:27:02.094746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.327 [2024-11-26 18:27:02.094761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.327 [2024-11-26 18:27:02.094773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.327 [2024-11-26 18:27:02.107016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.327 [2024-11-26 18:27:02.107430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-11-26 18:27:02.107461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.327 [2024-11-26 18:27:02.107478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.327 [2024-11-26 18:27:02.107714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.327 [2024-11-26 18:27:02.107919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.327 [2024-11-26 18:27:02.107940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.327 [2024-11-26 18:27:02.107957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.327 [2024-11-26 18:27:02.107970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.327 [2024-11-26 18:27:02.120091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.327 [2024-11-26 18:27:02.120473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.327 [2024-11-26 18:27:02.120503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.120519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.120737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.120943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.120963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.120976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.120987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.328 [2024-11-26 18:27:02.133273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 [2024-11-26 18:27:02.133657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.328 [2024-11-26 18:27:02.133685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.133701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.133918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.134143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.134163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.134176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.134188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.328 [2024-11-26 18:27:02.146604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 [2024-11-26 18:27:02.147005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.328 [2024-11-26 18:27:02.147059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.147075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.147332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.147551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.147571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.147586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.147598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.328 [2024-11-26 18:27:02.159841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 [2024-11-26 18:27:02.160187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.328 [2024-11-26 18:27:02.160215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.160231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.160476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.160686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.160705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.160718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.160729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.328 [2024-11-26 18:27:02.173109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 [2024-11-26 18:27:02.173460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.328 [2024-11-26 18:27:02.173489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.173505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.173741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.173952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.173971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.173983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.173995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.328 [2024-11-26 18:27:02.186319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 [2024-11-26 18:27:02.186646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.328 [2024-11-26 18:27:02.186672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.186687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.186898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.187104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.187124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.187136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.187148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.328 [2024-11-26 18:27:02.199559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 [2024-11-26 18:27:02.199927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.328 [2024-11-26 18:27:02.199957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.199978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.200215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.200446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.200466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.200478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.200490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.328 [2024-11-26 18:27:02.212869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 6967.67 IOPS, 27.22 MiB/s [2024-11-26T17:27:02.339Z] [2024-11-26 18:27:02.214693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.328 [2024-11-26 18:27:02.214735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.214750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.214966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.215172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.215191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.215204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.215215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.328 [2024-11-26 18:27:02.226039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 [2024-11-26 18:27:02.226456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.328 [2024-11-26 18:27:02.226485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.226501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.226737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.226943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.226962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.226975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.226986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.328 [2024-11-26 18:27:02.239186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 [2024-11-26 18:27:02.239584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.328 [2024-11-26 18:27:02.239612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.328 [2024-11-26 18:27:02.239628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.328 [2024-11-26 18:27:02.239859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.328 [2024-11-26 18:27:02.240065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.328 [2024-11-26 18:27:02.240085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.328 [2024-11-26 18:27:02.240098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.328 [2024-11-26 18:27:02.240110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.328 [2024-11-26 18:27:02.252483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.328 [2024-11-26 18:27:02.252864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.329 [2024-11-26 18:27:02.252893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.329 [2024-11-26 18:27:02.252909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.329 [2024-11-26 18:27:02.253144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.329 [2024-11-26 18:27:02.253379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.329 [2024-11-26 18:27:02.253400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.329 [2024-11-26 18:27:02.253413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.329 [2024-11-26 18:27:02.253426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.329 [2024-11-26 18:27:02.265667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.329 [2024-11-26 18:27:02.266036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.329 [2024-11-26 18:27:02.266089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.329 [2024-11-26 18:27:02.266105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.329 [2024-11-26 18:27:02.266364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.329 [2024-11-26 18:27:02.266559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.329 [2024-11-26 18:27:02.266579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.329 [2024-11-26 18:27:02.266591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.329 [2024-11-26 18:27:02.266603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.329 [2024-11-26 18:27:02.279162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.329 [2024-11-26 18:27:02.279537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.329 [2024-11-26 18:27:02.279587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.329 [2024-11-26 18:27:02.279604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.329 [2024-11-26 18:27:02.279870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.329 [2024-11-26 18:27:02.280059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.329 [2024-11-26 18:27:02.280078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.329 [2024-11-26 18:27:02.280098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.329 [2024-11-26 18:27:02.280110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.329 [2024-11-26 18:27:02.292696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.329 [2024-11-26 18:27:02.293109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.329 [2024-11-26 18:27:02.293137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.329 [2024-11-26 18:27:02.293152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.329 [2024-11-26 18:27:02.293401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.329 [2024-11-26 18:27:02.293638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.329 [2024-11-26 18:27:02.293673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.329 [2024-11-26 18:27:02.293686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.329 [2024-11-26 18:27:02.293698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.329 [2024-11-26 18:27:02.306031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.329 [2024-11-26 18:27:02.306395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.329 [2024-11-26 18:27:02.306425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.329 [2024-11-26 18:27:02.306442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.329 [2024-11-26 18:27:02.306683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.329 [2024-11-26 18:27:02.306888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.329 [2024-11-26 18:27:02.306908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.329 [2024-11-26 18:27:02.306921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.329 [2024-11-26 18:27:02.306934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.329 [2024-11-26 18:27:02.319415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.329 [2024-11-26 18:27:02.319833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.329 [2024-11-26 18:27:02.319862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.329 [2024-11-26 18:27:02.319878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.329 [2024-11-26 18:27:02.320104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.329 [2024-11-26 18:27:02.320346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.329 [2024-11-26 18:27:02.320368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.329 [2024-11-26 18:27:02.320381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.329 [2024-11-26 18:27:02.320394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.329 [2024-11-26 18:27:02.333260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.329 [2024-11-26 18:27:02.333716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.329 [2024-11-26 18:27:02.333750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.329 [2024-11-26 18:27:02.333767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.329 [2024-11-26 18:27:02.333999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.329 [2024-11-26 18:27:02.334238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.329 [2024-11-26 18:27:02.334275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.329 [2024-11-26 18:27:02.334289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.329 [2024-11-26 18:27:02.334313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.588 [2024-11-26 18:27:02.346432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.588 [2024-11-26 18:27:02.346827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.588 [2024-11-26 18:27:02.346855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.588 [2024-11-26 18:27:02.346871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.588 [2024-11-26 18:27:02.347088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.588 [2024-11-26 18:27:02.347293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.588 [2024-11-26 18:27:02.347340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.588 [2024-11-26 18:27:02.347355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.588 [2024-11-26 18:27:02.347367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.588 [2024-11-26 18:27:02.359448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.588 [2024-11-26 18:27:02.359770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.588 [2024-11-26 18:27:02.359799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.588 [2024-11-26 18:27:02.359815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.588 [2024-11-26 18:27:02.360034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.588 [2024-11-26 18:27:02.360239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.588 [2024-11-26 18:27:02.360259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.588 [2024-11-26 18:27:02.360272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.588 [2024-11-26 18:27:02.360284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.588 [2024-11-26 18:27:02.372571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.372944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.372978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.372994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.373214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.373464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.373486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.373500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.373512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.589 [2024-11-26 18:27:02.385705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.386109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.386138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.386153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.386386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.386587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.386622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.386634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.386646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.589 [2024-11-26 18:27:02.398811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.399218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.399246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.399262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.399541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.399770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.399791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.399803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.399815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.589 [2024-11-26 18:27:02.412021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.412366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.412395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.412411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.412651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.412856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.412876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.412888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.412900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.589 [2024-11-26 18:27:02.425207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.425640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.425685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.425701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.425938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.426142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.426161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.426173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.426186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.589 [2024-11-26 18:27:02.438295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.438745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.438774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.438790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.439037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.439226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.439245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.439258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.439269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.589 [2024-11-26 18:27:02.451597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.451956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.451984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.452000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.452236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.452471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.452492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.452510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.452522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.589 [2024-11-26 18:27:02.464864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.465217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.465246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.465262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.465530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.465754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.465775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.465787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.465799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.589 [2024-11-26 18:27:02.477990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.478397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.478426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.478442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.478681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.478887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.478908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.478921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.478932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.589 [2024-11-26 18:27:02.491228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.589 [2024-11-26 18:27:02.491547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.589 [2024-11-26 18:27:02.491576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.589 [2024-11-26 18:27:02.491592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.589 [2024-11-26 18:27:02.491810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.589 [2024-11-26 18:27:02.492015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.589 [2024-11-26 18:27:02.492036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.589 [2024-11-26 18:27:02.492048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.589 [2024-11-26 18:27:02.492060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.589 [2024-11-26 18:27:02.504472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.590 [2024-11-26 18:27:02.504813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.590 [2024-11-26 18:27:02.504842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.590 [2024-11-26 18:27:02.504857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.590 [2024-11-26 18:27:02.505076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.590 [2024-11-26 18:27:02.505280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.590 [2024-11-26 18:27:02.505300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.590 [2024-11-26 18:27:02.505349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.590 [2024-11-26 18:27:02.505362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.590 [2024-11-26 18:27:02.517931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.590 [2024-11-26 18:27:02.518347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.590 [2024-11-26 18:27:02.518377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.590 [2024-11-26 18:27:02.518394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.590 [2024-11-26 18:27:02.518625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.590 [2024-11-26 18:27:02.518839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.590 [2024-11-26 18:27:02.518858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.590 [2024-11-26 18:27:02.518871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.590 [2024-11-26 18:27:02.518883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.590 [2024-11-26 18:27:02.531298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.590 [2024-11-26 18:27:02.531692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.590 [2024-11-26 18:27:02.531721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.590 [2024-11-26 18:27:02.531737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.590 [2024-11-26 18:27:02.531980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.590 [2024-11-26 18:27:02.532191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.590 [2024-11-26 18:27:02.532211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.590 [2024-11-26 18:27:02.532223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.590 [2024-11-26 18:27:02.532235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.590 [2024-11-26 18:27:02.544666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.590 [2024-11-26 18:27:02.545052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.590 [2024-11-26 18:27:02.545111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.590 [2024-11-26 18:27:02.545127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.590 [2024-11-26 18:27:02.545388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.590 [2024-11-26 18:27:02.545630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.590 [2024-11-26 18:27:02.545666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.590 [2024-11-26 18:27:02.545678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.590 [2024-11-26 18:27:02.545690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.590 [2024-11-26 18:27:02.557897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.590 [2024-11-26 18:27:02.558242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.590 [2024-11-26 18:27:02.558270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.590 [2024-11-26 18:27:02.558286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.590 [2024-11-26 18:27:02.558551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.590 [2024-11-26 18:27:02.558775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.590 [2024-11-26 18:27:02.558794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.590 [2024-11-26 18:27:02.558806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.590 [2024-11-26 18:27:02.558818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.590 [2024-11-26 18:27:02.571115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.590 [2024-11-26 18:27:02.571596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.590 [2024-11-26 18:27:02.571649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.590 [2024-11-26 18:27:02.571689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.590 [2024-11-26 18:27:02.571972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.590 [2024-11-26 18:27:02.572236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.590 [2024-11-26 18:27:02.572265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.590 [2024-11-26 18:27:02.572312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.590 [2024-11-26 18:27:02.572336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.590 [2024-11-26 18:27:02.584962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.590 [2024-11-26 18:27:02.585347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.590 [2024-11-26 18:27:02.585380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.590 [2024-11-26 18:27:02.585398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.590 [2024-11-26 18:27:02.585647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.590 [2024-11-26 18:27:02.585837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.590 [2024-11-26 18:27:02.585857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.590 [2024-11-26 18:27:02.585869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.590 [2024-11-26 18:27:02.585881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.850 [2024-11-26 18:27:02.598620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.598930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.598973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.598990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.599206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.599454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.850 [2024-11-26 18:27:02.599477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.850 [2024-11-26 18:27:02.599491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.850 [2024-11-26 18:27:02.599504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.850 [2024-11-26 18:27:02.611863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.612205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.612233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.612249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.612514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.612753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.850 [2024-11-26 18:27:02.612788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.850 [2024-11-26 18:27:02.612801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.850 [2024-11-26 18:27:02.612813] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.850 [2024-11-26 18:27:02.625113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.625461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.625490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.625506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.625762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.625951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.850 [2024-11-26 18:27:02.625970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.850 [2024-11-26 18:27:02.625987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.850 [2024-11-26 18:27:02.625999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.850 [2024-11-26 18:27:02.638247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.638620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.638664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.638680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.638916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.639120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.850 [2024-11-26 18:27:02.639139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.850 [2024-11-26 18:27:02.639151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.850 [2024-11-26 18:27:02.639162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.850 [2024-11-26 18:27:02.651436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.651781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.651810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.651826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.652062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.652267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.850 [2024-11-26 18:27:02.652287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.850 [2024-11-26 18:27:02.652299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.850 [2024-11-26 18:27:02.652344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.850 [2024-11-26 18:27:02.664691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.665097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.665125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.665141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.665388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.665604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.850 [2024-11-26 18:27:02.665624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.850 [2024-11-26 18:27:02.665637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.850 [2024-11-26 18:27:02.665648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.850 [2024-11-26 18:27:02.677937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.678279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.678328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.678347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.678587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.678792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.850 [2024-11-26 18:27:02.678811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.850 [2024-11-26 18:27:02.678823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.850 [2024-11-26 18:27:02.678835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.850 [2024-11-26 18:27:02.691097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.691432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.691461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.691477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.691700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.691905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.850 [2024-11-26 18:27:02.691925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.850 [2024-11-26 18:27:02.691937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.850 [2024-11-26 18:27:02.691948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.850 [2024-11-26 18:27:02.704227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.704561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.704590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.704606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.704838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.705043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.850 [2024-11-26 18:27:02.705063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.850 [2024-11-26 18:27:02.705075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.850 [2024-11-26 18:27:02.705086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.850 [2024-11-26 18:27:02.717392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.850 [2024-11-26 18:27:02.717782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.850 [2024-11-26 18:27:02.717815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.850 [2024-11-26 18:27:02.717832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.850 [2024-11-26 18:27:02.718068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.850 [2024-11-26 18:27:02.718272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.718292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.718312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.718342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.851 [2024-11-26 18:27:02.730647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.730988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.731014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.731030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.731259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.731462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.731482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.731495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.731507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.851 [2024-11-26 18:27:02.743887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.744291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.744342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.744359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.744601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.744805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.744824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.744836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.744848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.851 [2024-11-26 18:27:02.757132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.757512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.757555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.757571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.757808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.758013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.758032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.758044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.758055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.851 [2024-11-26 18:27:02.770338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.770653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.770681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.770697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.770914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.771118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.771138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.771149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.771161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.851 [2024-11-26 18:27:02.783479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.783869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.783896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.783912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.784131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.784379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.784401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.784414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.784426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.851 [2024-11-26 18:27:02.796713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.797057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.797084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.797100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.797347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.797548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.797568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.797586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.797599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.851 [2024-11-26 18:27:02.809890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.810201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.810228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.810243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.810524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.810749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.810769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.810781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.810792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.851 [2024-11-26 18:27:02.823008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.823434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.823474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.823501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.823802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.824059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.824088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.824109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.824127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:14.851 [2024-11-26 18:27:02.837123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.837540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.837572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.837604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.837821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.838027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.851 [2024-11-26 18:27:02.838046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.851 [2024-11-26 18:27:02.838059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.851 [2024-11-26 18:27:02.838070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:14.851 [2024-11-26 18:27:02.850469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:14.851 [2024-11-26 18:27:02.850830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.851 [2024-11-26 18:27:02.850858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:14.851 [2024-11-26 18:27:02.850873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:14.851 [2024-11-26 18:27:02.851105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:14.851 [2024-11-26 18:27:02.851322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:14.852 [2024-11-26 18:27:02.851343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:14.852 [2024-11-26 18:27:02.851373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:14.852 [2024-11-26 18:27:02.851385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.111 [2024-11-26 18:27:02.863897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.111 [2024-11-26 18:27:02.864265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.111 [2024-11-26 18:27:02.864316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.111 [2024-11-26 18:27:02.864335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.111 [2024-11-26 18:27:02.864566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.111 [2024-11-26 18:27:02.864788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.111 [2024-11-26 18:27:02.864807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.111 [2024-11-26 18:27:02.864820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.111 [2024-11-26 18:27:02.864831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.111 [2024-11-26 18:27:02.877143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.111 [2024-11-26 18:27:02.877586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.111 [2024-11-26 18:27:02.877630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.111 [2024-11-26 18:27:02.877647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.111 [2024-11-26 18:27:02.877882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.111 [2024-11-26 18:27:02.878086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.111 [2024-11-26 18:27:02.878105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.111 [2024-11-26 18:27:02.878118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.111 [2024-11-26 18:27:02.878129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.111 [2024-11-26 18:27:02.890417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.111 [2024-11-26 18:27:02.890751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.111 [2024-11-26 18:27:02.890797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.111 [2024-11-26 18:27:02.890813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.111 [2024-11-26 18:27:02.891029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.111 [2024-11-26 18:27:02.891235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.111 [2024-11-26 18:27:02.891254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.111 [2024-11-26 18:27:02.891266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.111 [2024-11-26 18:27:02.891277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.111 [2024-11-26 18:27:02.903597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.111 [2024-11-26 18:27:02.903905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.111 [2024-11-26 18:27:02.903993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.111 [2024-11-26 18:27:02.904009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.111 [2024-11-26 18:27:02.904239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.111 [2024-11-26 18:27:02.904476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.111 [2024-11-26 18:27:02.904498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.111 [2024-11-26 18:27:02.904511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.111 [2024-11-26 18:27:02.904523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.111 [2024-11-26 18:27:02.916828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.111 [2024-11-26 18:27:02.917184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.111 [2024-11-26 18:27:02.917212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.111 [2024-11-26 18:27:02.917227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.111 [2024-11-26 18:27:02.917500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.111 [2024-11-26 18:27:02.917727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.111 [2024-11-26 18:27:02.917746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.111 [2024-11-26 18:27:02.917758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.111 [2024-11-26 18:27:02.917770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.111 [2024-11-26 18:27:02.929978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.111 [2024-11-26 18:27:02.930314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.111 [2024-11-26 18:27:02.930342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.111 [2024-11-26 18:27:02.930358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.111 [2024-11-26 18:27:02.930590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.111 [2024-11-26 18:27:02.930795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.111 [2024-11-26 18:27:02.930814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.111 [2024-11-26 18:27:02.930826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.111 [2024-11-26 18:27:02.930837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.111 [2024-11-26 18:27:02.943176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.111 [2024-11-26 18:27:02.943543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.111 [2024-11-26 18:27:02.943571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:02.943587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:02.943822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:02.944027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:02.944046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:02.944058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:02.944069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.112 [2024-11-26 18:27:02.956379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:02.956751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:02.956779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:02.956794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:02.957030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:02.957219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:02.957238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:02.957250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:02.957261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.112 [2024-11-26 18:27:02.969615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:02.969957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:02.969984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:02.970000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:02.970230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:02.970466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:02.970487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:02.970505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:02.970517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.112 [2024-11-26 18:27:02.982794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:02.983175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:02.983202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:02.983217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:02.983497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:02.983724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:02.983743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:02.983755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:02.983767] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.112 [2024-11-26 18:27:02.995985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:02.996294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:02.996344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:02.996361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:02.996590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:02.996813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:02.996832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:02.996844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:02.996856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.112 [2024-11-26 18:27:03.009077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:03.009454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:03.009483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:03.009499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:03.009722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:03.009927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:03.009946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:03.009958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:03.009969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.112 [2024-11-26 18:27:03.022228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:03.022661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:03.022690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:03.022705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:03.022942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:03.023147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:03.023167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:03.023179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:03.023190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.112 [2024-11-26 18:27:03.035504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:03.035892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:03.035966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:03.035982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:03.036213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:03.036449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:03.036470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:03.036483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:03.036495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.112 [2024-11-26 18:27:03.048701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:03.049111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:03.049163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:03.049179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:03.049439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:03.049659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:03.049679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:03.049691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:03.049702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.112 [2024-11-26 18:27:03.061976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:03.062366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:03.062400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.112 [2024-11-26 18:27:03.062417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.112 [2024-11-26 18:27:03.062639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.112 [2024-11-26 18:27:03.062863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.112 [2024-11-26 18:27:03.062882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.112 [2024-11-26 18:27:03.062894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.112 [2024-11-26 18:27:03.062906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.112 [2024-11-26 18:27:03.075143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.112 [2024-11-26 18:27:03.075592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.112 [2024-11-26 18:27:03.075621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.113 [2024-11-26 18:27:03.075638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.113 [2024-11-26 18:27:03.075889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.113 [2024-11-26 18:27:03.076104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.113 [2024-11-26 18:27:03.076125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.113 [2024-11-26 18:27:03.076138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.113 [2024-11-26 18:27:03.076150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.113 [2024-11-26 18:27:03.089275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.113 [2024-11-26 18:27:03.089636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.113 [2024-11-26 18:27:03.089666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.113 [2024-11-26 18:27:03.089683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.113 [2024-11-26 18:27:03.089907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.113 [2024-11-26 18:27:03.090113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.113 [2024-11-26 18:27:03.090133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.113 [2024-11-26 18:27:03.090145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.113 [2024-11-26 18:27:03.090157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.113 [2024-11-26 18:27:03.102468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.113 [2024-11-26 18:27:03.102884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.113 [2024-11-26 18:27:03.102938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.113 [2024-11-26 18:27:03.102954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.113 [2024-11-26 18:27:03.103198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.113 [2024-11-26 18:27:03.103419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.113 [2024-11-26 18:27:03.103441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.113 [2024-11-26 18:27:03.103454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.113 [2024-11-26 18:27:03.103466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.113 [2024-11-26 18:27:03.115749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.113 [2024-11-26 18:27:03.116195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.113 [2024-11-26 18:27:03.116225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.113 [2024-11-26 18:27:03.116242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.113 [2024-11-26 18:27:03.116486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.113 [2024-11-26 18:27:03.116732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.113 [2024-11-26 18:27:03.116751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.113 [2024-11-26 18:27:03.116764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.113 [2024-11-26 18:27:03.116775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.372 [2024-11-26 18:27:03.128877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.372 [2024-11-26 18:27:03.129190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.372 [2024-11-26 18:27:03.129218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.372 [2024-11-26 18:27:03.129234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.372 [2024-11-26 18:27:03.129516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.372 [2024-11-26 18:27:03.129745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.372 [2024-11-26 18:27:03.129765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.372 [2024-11-26 18:27:03.129777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.372 [2024-11-26 18:27:03.129788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.372 [2024-11-26 18:27:03.142049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.372 [2024-11-26 18:27:03.142454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.372 [2024-11-26 18:27:03.142483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.372 [2024-11-26 18:27:03.142498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.372 [2024-11-26 18:27:03.142729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.372 [2024-11-26 18:27:03.142934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.372 [2024-11-26 18:27:03.142953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.372 [2024-11-26 18:27:03.142970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.372 [2024-11-26 18:27:03.142982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.372 [2024-11-26 18:27:03.155354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.372 [2024-11-26 18:27:03.155721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.372 [2024-11-26 18:27:03.155762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.372 [2024-11-26 18:27:03.155778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.372 [2024-11-26 18:27:03.155996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.372 [2024-11-26 18:27:03.156202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.372 [2024-11-26 18:27:03.156221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.372 [2024-11-26 18:27:03.156233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.372 [2024-11-26 18:27:03.156245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.372 [2024-11-26 18:27:03.168517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.372 [2024-11-26 18:27:03.168940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.372 [2024-11-26 18:27:03.168968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.372 [2024-11-26 18:27:03.168984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.372 [2024-11-26 18:27:03.169220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.372 [2024-11-26 18:27:03.169459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.372 [2024-11-26 18:27:03.169481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.372 [2024-11-26 18:27:03.169495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.372 [2024-11-26 18:27:03.169507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.372 [2024-11-26 18:27:03.181760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.372 [2024-11-26 18:27:03.182166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.372 [2024-11-26 18:27:03.182195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.372 [2024-11-26 18:27:03.182211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.372 [2024-11-26 18:27:03.182476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.372 [2024-11-26 18:27:03.182703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.372 [2024-11-26 18:27:03.182722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.372 [2024-11-26 18:27:03.182734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.372 [2024-11-26 18:27:03.182746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.372 [2024-11-26 18:27:03.195004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.372 [2024-11-26 18:27:03.195361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.372 [2024-11-26 18:27:03.195391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.372 [2024-11-26 18:27:03.195408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.372 [2024-11-26 18:27:03.195650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.195854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.195874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.195886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.195897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.373 [2024-11-26 18:27:03.208225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.208614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.373 [2024-11-26 18:27:03.208642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.373 [2024-11-26 18:27:03.208658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.373 [2024-11-26 18:27:03.208877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.209085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.209104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.209117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.209128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.373 5225.75 IOPS, 20.41 MiB/s [2024-11-26T17:27:03.384Z] [2024-11-26 18:27:03.221408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.221779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.373 [2024-11-26 18:27:03.221806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.373 [2024-11-26 18:27:03.221821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.373 [2024-11-26 18:27:03.222051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.222257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.222276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.222312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.222327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
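The bdevperf progress sample embedded above, "5225.75 IOPS, 20.41 MiB/s", is self-consistent if the workload uses 4 KiB I/Os: 5225.75 x 4096 bytes per second is about 20.41 MiB/s. A quick check of that arithmetic (the 4 KiB block size is inferred from the two numbers, not stated anywhere in this log):

iops = 5225.75
io_size_bytes = 4096                      # assumption: 4 KiB I/O size, inferred only
mib_per_s = iops * io_size_bytes / (1024 * 1024)
print(f"{mib_per_s:.2f} MiB/s")           # prints 20.41 MiB/s, matching the sample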
00:31:15.373 [2024-11-26 18:27:03.234647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.234994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.373 [2024-11-26 18:27:03.235027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.373 [2024-11-26 18:27:03.235045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.373 [2024-11-26 18:27:03.235286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.235526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.235547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.235560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.235572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.373 [2024-11-26 18:27:03.247880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.248229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.373 [2024-11-26 18:27:03.248258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.373 [2024-11-26 18:27:03.248275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.373 [2024-11-26 18:27:03.248530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.248777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.248797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.248809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.248821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.373 [2024-11-26 18:27:03.261423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.261770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.373 [2024-11-26 18:27:03.261800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.373 [2024-11-26 18:27:03.261816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.373 [2024-11-26 18:27:03.262034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.262241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.262261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.262272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.262300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.373 [2024-11-26 18:27:03.274714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.275120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.373 [2024-11-26 18:27:03.275148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.373 [2024-11-26 18:27:03.275164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.373 [2024-11-26 18:27:03.275420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.275671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.275691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.275703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.275715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.373 [2024-11-26 18:27:03.287968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.288258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.373 [2024-11-26 18:27:03.288308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.373 [2024-11-26 18:27:03.288326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.373 [2024-11-26 18:27:03.288586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.288835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.288856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.288869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.288881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.373 [2024-11-26 18:27:03.301416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.301867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.373 [2024-11-26 18:27:03.301896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.373 [2024-11-26 18:27:03.301912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.373 [2024-11-26 18:27:03.302157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.302391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.302414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.302428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.302442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.373 [2024-11-26 18:27:03.315000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.315387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.373 [2024-11-26 18:27:03.315417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.373 [2024-11-26 18:27:03.315434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.373 [2024-11-26 18:27:03.315664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.373 [2024-11-26 18:27:03.315894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.373 [2024-11-26 18:27:03.315919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.373 [2024-11-26 18:27:03.315933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.373 [2024-11-26 18:27:03.315945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.373 [2024-11-26 18:27:03.328730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.373 [2024-11-26 18:27:03.329112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.374 [2024-11-26 18:27:03.329141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.374 [2024-11-26 18:27:03.329158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.374 [2024-11-26 18:27:03.329383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.374 [2024-11-26 18:27:03.329620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.374 [2024-11-26 18:27:03.329642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.374 [2024-11-26 18:27:03.329656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.374 [2024-11-26 18:27:03.329668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.374 [2024-11-26 18:27:03.342076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.374 [2024-11-26 18:27:03.342383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.374 [2024-11-26 18:27:03.342415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.374 [2024-11-26 18:27:03.342432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.374 [2024-11-26 18:27:03.342663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.374 [2024-11-26 18:27:03.342868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.374 [2024-11-26 18:27:03.342888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.374 [2024-11-26 18:27:03.342900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.374 [2024-11-26 18:27:03.342911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.374 [2024-11-26 18:27:03.355247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.374 [2024-11-26 18:27:03.355627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.374 [2024-11-26 18:27:03.355671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.374 [2024-11-26 18:27:03.355687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.374 [2024-11-26 18:27:03.355920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.374 [2024-11-26 18:27:03.356125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.374 [2024-11-26 18:27:03.356144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.374 [2024-11-26 18:27:03.356157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.374 [2024-11-26 18:27:03.356169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.374 [2024-11-26 18:27:03.368843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.374 [2024-11-26 18:27:03.369231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.374 [2024-11-26 18:27:03.369260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.374 [2024-11-26 18:27:03.369275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.374 [2024-11-26 18:27:03.369515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.374 [2024-11-26 18:27:03.369757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.374 [2024-11-26 18:27:03.369778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.374 [2024-11-26 18:27:03.369791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.374 [2024-11-26 18:27:03.369803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.634 [2024-11-26 18:27:03.382727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.634 [2024-11-26 18:27:03.383095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.634 [2024-11-26 18:27:03.383134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.634 [2024-11-26 18:27:03.383167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.634 [2024-11-26 18:27:03.383392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.634 [2024-11-26 18:27:03.383627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.634 [2024-11-26 18:27:03.383647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.634 [2024-11-26 18:27:03.383675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.634 [2024-11-26 18:27:03.383687] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.634 [2024-11-26 18:27:03.395997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.634 [2024-11-26 18:27:03.396350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.634 [2024-11-26 18:27:03.396388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.634 [2024-11-26 18:27:03.396406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.634 [2024-11-26 18:27:03.396639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.634 [2024-11-26 18:27:03.396862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.634 [2024-11-26 18:27:03.396882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.634 [2024-11-26 18:27:03.396894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.634 [2024-11-26 18:27:03.396906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.634 [2024-11-26 18:27:03.409356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.634 [2024-11-26 18:27:03.409769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.634 [2024-11-26 18:27:03.409823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.634 [2024-11-26 18:27:03.409840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.634 [2024-11-26 18:27:03.410074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.634 [2024-11-26 18:27:03.410278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.634 [2024-11-26 18:27:03.410322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.634 [2024-11-26 18:27:03.410336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.634 [2024-11-26 18:27:03.410348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.634 [2024-11-26 18:27:03.422579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.634 [2024-11-26 18:27:03.422969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.634 [2024-11-26 18:27:03.423018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.634 [2024-11-26 18:27:03.423034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.634 [2024-11-26 18:27:03.423269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.634 [2024-11-26 18:27:03.423504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.634 [2024-11-26 18:27:03.423525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.634 [2024-11-26 18:27:03.423538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.634 [2024-11-26 18:27:03.423550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.634 [2024-11-26 18:27:03.435798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.634 [2024-11-26 18:27:03.436111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.634 [2024-11-26 18:27:03.436139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.634 [2024-11-26 18:27:03.436155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.634 [2024-11-26 18:27:03.436401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.634 [2024-11-26 18:27:03.436618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.634 [2024-11-26 18:27:03.436652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.634 [2024-11-26 18:27:03.436665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.634 [2024-11-26 18:27:03.436677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.634 [2024-11-26 18:27:03.449059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.634 [2024-11-26 18:27:03.449466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.634 [2024-11-26 18:27:03.449495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.634 [2024-11-26 18:27:03.449511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.634 [2024-11-26 18:27:03.449752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.634 [2024-11-26 18:27:03.449957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.634 [2024-11-26 18:27:03.449977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.634 [2024-11-26 18:27:03.449988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.634 [2024-11-26 18:27:03.450000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.634 [2024-11-26 18:27:03.462285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.634 [2024-11-26 18:27:03.462658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.634 [2024-11-26 18:27:03.462687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.634 [2024-11-26 18:27:03.462703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.462939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.463133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.463153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.463166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.463177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.635 [2024-11-26 18:27:03.475460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.475884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.635 [2024-11-26 18:27:03.475913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.635 [2024-11-26 18:27:03.475928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.476163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.476412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.476433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.476446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.476458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.635 [2024-11-26 18:27:03.488527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.488933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.635 [2024-11-26 18:27:03.488960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.635 [2024-11-26 18:27:03.488976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.489213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.489463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.489489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.489503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.489515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.635 [2024-11-26 18:27:03.501655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.502011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.635 [2024-11-26 18:27:03.502040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.635 [2024-11-26 18:27:03.502057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.502293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.502495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.502515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.502527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.502540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.635 [2024-11-26 18:27:03.514771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.515128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.635 [2024-11-26 18:27:03.515179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.635 [2024-11-26 18:27:03.515195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.515469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.515691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.515712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.515724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.515736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.635 [2024-11-26 18:27:03.527801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.528175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.635 [2024-11-26 18:27:03.528204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.635 [2024-11-26 18:27:03.528219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.528487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.528704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.528725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.528739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.528751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.635 [2024-11-26 18:27:03.540883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.541272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.635 [2024-11-26 18:27:03.541335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.635 [2024-11-26 18:27:03.541351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.541592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.541782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.541803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.541815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.541827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.635 [2024-11-26 18:27:03.554015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.554421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.635 [2024-11-26 18:27:03.554450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.635 [2024-11-26 18:27:03.554466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.554697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.554903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.554923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.554936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.554948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.635 [2024-11-26 18:27:03.567226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.567639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.635 [2024-11-26 18:27:03.567668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.635 [2024-11-26 18:27:03.567685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.567921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.568126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.568146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.568159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.568170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.635 [2024-11-26 18:27:03.580384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.580724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.635 [2024-11-26 18:27:03.580771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.635 [2024-11-26 18:27:03.580796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.635 [2024-11-26 18:27:03.581072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.635 [2024-11-26 18:27:03.581356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.635 [2024-11-26 18:27:03.581386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.635 [2024-11-26 18:27:03.581408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.635 [2024-11-26 18:27:03.581432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.635 [2024-11-26 18:27:03.594842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.635 [2024-11-26 18:27:03.595265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.636 [2024-11-26 18:27:03.595333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.636 [2024-11-26 18:27:03.595367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.636 [2024-11-26 18:27:03.595611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.636 [2024-11-26 18:27:03.595838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.636 [2024-11-26 18:27:03.595860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.636 [2024-11-26 18:27:03.595873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.636 [2024-11-26 18:27:03.595886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.636 [2024-11-26 18:27:03.608185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.636 [2024-11-26 18:27:03.608546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.636 [2024-11-26 18:27:03.608600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.636 [2024-11-26 18:27:03.608617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.636 [2024-11-26 18:27:03.608849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.636 [2024-11-26 18:27:03.609043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.636 [2024-11-26 18:27:03.609064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.636 [2024-11-26 18:27:03.609077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.636 [2024-11-26 18:27:03.609089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.636 [2024-11-26 18:27:03.621502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.636 [2024-11-26 18:27:03.621899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.636 [2024-11-26 18:27:03.621942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.636 [2024-11-26 18:27:03.621959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.636 [2024-11-26 18:27:03.622195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.636 [2024-11-26 18:27:03.622430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.636 [2024-11-26 18:27:03.622453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.636 [2024-11-26 18:27:03.622469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.636 [2024-11-26 18:27:03.622482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.636 [2024-11-26 18:27:03.634755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.636 [2024-11-26 18:27:03.635098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.636 [2024-11-26 18:27:03.635127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.636 [2024-11-26 18:27:03.635143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.636 [2024-11-26 18:27:03.635397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.636 [2024-11-26 18:27:03.635637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.636 [2024-11-26 18:27:03.635658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.636 [2024-11-26 18:27:03.635686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.636 [2024-11-26 18:27:03.635699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.896 [2024-11-26 18:27:03.648129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.896 [2024-11-26 18:27:03.648511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.896 [2024-11-26 18:27:03.648541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.896 [2024-11-26 18:27:03.648557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.896 [2024-11-26 18:27:03.648789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.896 [2024-11-26 18:27:03.649030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.896 [2024-11-26 18:27:03.649050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.896 [2024-11-26 18:27:03.649063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.896 [2024-11-26 18:27:03.649075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.896 [2024-11-26 18:27:03.661213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.896 [2024-11-26 18:27:03.661563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.896 [2024-11-26 18:27:03.661593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.896 [2024-11-26 18:27:03.661609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.896 [2024-11-26 18:27:03.661840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.896 [2024-11-26 18:27:03.662044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.896 [2024-11-26 18:27:03.662064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.896 [2024-11-26 18:27:03.662081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.896 [2024-11-26 18:27:03.662094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.896 [2024-11-26 18:27:03.674298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.896 [2024-11-26 18:27:03.674598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.896 [2024-11-26 18:27:03.674640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.896 [2024-11-26 18:27:03.674683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.896 [2024-11-26 18:27:03.674909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.896 [2024-11-26 18:27:03.675138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.896 [2024-11-26 18:27:03.675159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.896 [2024-11-26 18:27:03.675171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.896 [2024-11-26 18:27:03.675183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.896 [2024-11-26 18:27:03.687762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.896 [2024-11-26 18:27:03.688112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.896 [2024-11-26 18:27:03.688141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.896 [2024-11-26 18:27:03.688157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.896 [2024-11-26 18:27:03.688398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.896 [2024-11-26 18:27:03.688633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.896 [2024-11-26 18:27:03.688669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.896 [2024-11-26 18:27:03.688682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.896 [2024-11-26 18:27:03.688694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.896 [2024-11-26 18:27:03.701159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.896 [2024-11-26 18:27:03.701504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.896 [2024-11-26 18:27:03.701534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.896 [2024-11-26 18:27:03.701551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.896 [2024-11-26 18:27:03.701793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.896 [2024-11-26 18:27:03.702004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.896 [2024-11-26 18:27:03.702024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.896 [2024-11-26 18:27:03.702037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.896 [2024-11-26 18:27:03.702049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.896 [2024-11-26 18:27:03.714512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.896 [2024-11-26 18:27:03.714905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.896 [2024-11-26 18:27:03.714934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.896 [2024-11-26 18:27:03.714950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.896 [2024-11-26 18:27:03.715187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.896 [2024-11-26 18:27:03.715422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.896 [2024-11-26 18:27:03.715444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.896 [2024-11-26 18:27:03.715458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.896 [2024-11-26 18:27:03.715470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.896 [2024-11-26 18:27:03.727694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.896 [2024-11-26 18:27:03.728039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.728068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.728084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.728334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.728548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.728570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.728584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.728610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.897 [2024-11-26 18:27:03.740897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.741311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.741341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.741357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.741594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.741798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.741818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.741831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.741843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.897 [2024-11-26 18:27:03.754096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.754457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.754491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.754508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.754745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.754949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.754969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.754981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.754993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.897 [2024-11-26 18:27:03.767239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.767593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.767622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.767638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.767877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.768080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.768101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.768113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.768124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.897 [2024-11-26 18:27:03.780563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.780989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.781017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.781033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.781250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.781487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.781509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.781522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.781535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.897 [2024-11-26 18:27:03.793911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.794266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.794317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.794350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.794600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.794806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.794827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.794839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.794851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.897 [2024-11-26 18:27:03.807116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.807453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.807482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.807498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.807732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.807935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.807956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.807968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.807980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.897 [2024-11-26 18:27:03.820137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.820552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.820580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.820596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.820829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.821034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.821052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.821064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.821076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.897 [2024-11-26 18:27:03.833357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.833789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.833827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.833854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.834139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.834416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.834447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.834474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.834497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.897 [2024-11-26 18:27:03.847436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.847814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.897 [2024-11-26 18:27:03.847885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.897 [2024-11-26 18:27:03.847902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.897 [2024-11-26 18:27:03.848143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.897 [2024-11-26 18:27:03.848628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.897 [2024-11-26 18:27:03.848666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.897 [2024-11-26 18:27:03.848679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.897 [2024-11-26 18:27:03.848692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.897 [2024-11-26 18:27:03.860685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.897 [2024-11-26 18:27:03.861033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.898 [2024-11-26 18:27:03.861062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.898 [2024-11-26 18:27:03.861078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.898 [2024-11-26 18:27:03.861326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.898 [2024-11-26 18:27:03.861540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.898 [2024-11-26 18:27:03.861562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.898 [2024-11-26 18:27:03.861575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.898 [2024-11-26 18:27:03.861602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.898 [2024-11-26 18:27:03.873690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.898 [2024-11-26 18:27:03.874036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.898 [2024-11-26 18:27:03.874065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.898 [2024-11-26 18:27:03.874080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.898 [2024-11-26 18:27:03.874329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.898 [2024-11-26 18:27:03.874529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.898 [2024-11-26 18:27:03.874550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.898 [2024-11-26 18:27:03.874563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.898 [2024-11-26 18:27:03.874576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:15.898 [2024-11-26 18:27:03.886815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.898 [2024-11-26 18:27:03.887220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.898 [2024-11-26 18:27:03.887248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.898 [2024-11-26 18:27:03.887264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.898 [2024-11-26 18:27:03.887527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.898 [2024-11-26 18:27:03.887735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.898 [2024-11-26 18:27:03.887755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.898 [2024-11-26 18:27:03.887768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.898 [2024-11-26 18:27:03.887780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:15.898 [2024-11-26 18:27:03.899989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:15.898 [2024-11-26 18:27:03.900329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.898 [2024-11-26 18:27:03.900359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:15.898 [2024-11-26 18:27:03.900375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:15.898 [2024-11-26 18:27:03.900654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:15.898 [2024-11-26 18:27:03.900871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:15.898 [2024-11-26 18:27:03.900893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:15.898 [2024-11-26 18:27:03.900921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:15.898 [2024-11-26 18:27:03.900933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.162 [2024-11-26 18:27:03.913308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.162 [2024-11-26 18:27:03.913660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.162 [2024-11-26 18:27:03.913690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.162 [2024-11-26 18:27:03.913707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.162 [2024-11-26 18:27:03.913942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.162 [2024-11-26 18:27:03.914148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.162 [2024-11-26 18:27:03.914169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.162 [2024-11-26 18:27:03.914181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.162 [2024-11-26 18:27:03.914192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.162 [2024-11-26 18:27:03.926446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.162 [2024-11-26 18:27:03.926766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.162 [2024-11-26 18:27:03.926799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.162 [2024-11-26 18:27:03.926815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.162 [2024-11-26 18:27:03.927036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.162 [2024-11-26 18:27:03.927241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.162 [2024-11-26 18:27:03.927261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.162 [2024-11-26 18:27:03.927274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.162 [2024-11-26 18:27:03.927286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.162 [2024-11-26 18:27:03.939509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.162 [2024-11-26 18:27:03.939917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.162 [2024-11-26 18:27:03.939945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.162 [2024-11-26 18:27:03.939961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.162 [2024-11-26 18:27:03.940201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.162 [2024-11-26 18:27:03.940454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.162 [2024-11-26 18:27:03.940478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.162 [2024-11-26 18:27:03.940491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.162 [2024-11-26 18:27:03.940505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.162 [2024-11-26 18:27:03.952599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.162 [2024-11-26 18:27:03.952910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.162 [2024-11-26 18:27:03.952938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.162 [2024-11-26 18:27:03.952954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.162 [2024-11-26 18:27:03.953173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.162 [2024-11-26 18:27:03.953428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.162 [2024-11-26 18:27:03.953451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.162 [2024-11-26 18:27:03.953465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.162 [2024-11-26 18:27:03.953478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.162 [2024-11-26 18:27:03.965738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.162 [2024-11-26 18:27:03.966145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.162 [2024-11-26 18:27:03.966174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.162 [2024-11-26 18:27:03.966190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.162 [2024-11-26 18:27:03.966464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.162 [2024-11-26 18:27:03.966679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.162 [2024-11-26 18:27:03.966699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.162 [2024-11-26 18:27:03.966711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.162 [2024-11-26 18:27:03.966722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.162 [2024-11-26 18:27:03.978884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.162 [2024-11-26 18:27:03.979227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.162 [2024-11-26 18:27:03.979256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.162 [2024-11-26 18:27:03.979272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.162 [2024-11-26 18:27:03.979538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.162 [2024-11-26 18:27:03.979761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.162 [2024-11-26 18:27:03.979782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.162 [2024-11-26 18:27:03.979794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.162 [2024-11-26 18:27:03.979806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.162 [2024-11-26 18:27:03.991980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.162 [2024-11-26 18:27:03.992327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.162 [2024-11-26 18:27:03.992356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.162 [2024-11-26 18:27:03.992372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.162 [2024-11-26 18:27:03.992609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.162 [2024-11-26 18:27:03.992814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.162 [2024-11-26 18:27:03.992835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.162 [2024-11-26 18:27:03.992847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.162 [2024-11-26 18:27:03.992859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.162 [2024-11-26 18:27:04.005043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.162 [2024-11-26 18:27:04.005419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.162 [2024-11-26 18:27:04.005449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.162 [2024-11-26 18:27:04.005466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.162 [2024-11-26 18:27:04.005691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.162 [2024-11-26 18:27:04.005895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.162 [2024-11-26 18:27:04.005915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.162 [2024-11-26 18:27:04.005932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.162 [2024-11-26 18:27:04.005946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.162 [2024-11-26 18:27:04.018206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.162 [2024-11-26 18:27:04.018619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.162 [2024-11-26 18:27:04.018648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.162 [2024-11-26 18:27:04.018664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.162 [2024-11-26 18:27:04.018901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.162 [2024-11-26 18:27:04.019106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.019126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.019139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.019152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.163 [2024-11-26 18:27:04.031348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.031748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.031777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.031793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.032030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.032235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.032256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.032268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.032279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.163 [2024-11-26 18:27:04.044446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.044790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.044819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.044835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.045072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.045277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.045298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.045337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.045352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.163 [2024-11-26 18:27:04.057514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.057888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.057917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.057932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.058151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.058396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.058418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.058432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.058445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.163 [2024-11-26 18:27:04.070657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.071013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.071056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.071072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.071292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.071513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.071534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.071546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.071558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.163 [2024-11-26 18:27:04.083887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.084277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.084339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.084367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.084669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.084940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.084968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.084989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.085008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.163 [2024-11-26 18:27:04.098053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.098418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.098469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.098488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.098737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.098942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.098963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.098976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.098988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.163 [2024-11-26 18:27:04.111264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.111621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.111650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.111665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.111862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.112082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.112103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.112116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.112127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.163 [2024-11-26 18:27:04.124438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.124841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.124870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.124885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.125105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.125336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.125357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.125370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.125382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.163 [2024-11-26 18:27:04.137589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.137993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.138022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.138037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.138274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.138508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.163 [2024-11-26 18:27:04.138529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.163 [2024-11-26 18:27:04.138543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.163 [2024-11-26 18:27:04.138555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.163 [2024-11-26 18:27:04.150712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.163 [2024-11-26 18:27:04.151120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.163 [2024-11-26 18:27:04.151149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.163 [2024-11-26 18:27:04.151164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.163 [2024-11-26 18:27:04.151412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.163 [2024-11-26 18:27:04.151642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.164 [2024-11-26 18:27:04.151664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.164 [2024-11-26 18:27:04.151676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.164 [2024-11-26 18:27:04.151688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.164 [2024-11-26 18:27:04.164122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.164 [2024-11-26 18:27:04.164502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.164 [2024-11-26 18:27:04.164533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.164 [2024-11-26 18:27:04.164550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.164 [2024-11-26 18:27:04.164766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.164 [2024-11-26 18:27:04.165017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.164 [2024-11-26 18:27:04.165038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.164 [2024-11-26 18:27:04.165051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.164 [2024-11-26 18:27:04.165062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.487 [2024-11-26 18:27:04.177727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.487 [2024-11-26 18:27:04.178116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.487 [2024-11-26 18:27:04.178147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.487 [2024-11-26 18:27:04.178163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.487 [2024-11-26 18:27:04.178408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.488 [2024-11-26 18:27:04.178637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.488 [2024-11-26 18:27:04.178660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.488 [2024-11-26 18:27:04.178679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.488 [2024-11-26 18:27:04.178693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.488 [2024-11-26 18:27:04.190990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.488 [2024-11-26 18:27:04.191333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.488 [2024-11-26 18:27:04.191363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.488 [2024-11-26 18:27:04.191379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.488 [2024-11-26 18:27:04.191615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.488 [2024-11-26 18:27:04.191804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.488 [2024-11-26 18:27:04.191824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.488 [2024-11-26 18:27:04.191836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.488 [2024-11-26 18:27:04.191848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.488 [2024-11-26 18:27:04.204066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.488 [2024-11-26 18:27:04.204379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.488 [2024-11-26 18:27:04.204408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.488 [2024-11-26 18:27:04.204423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.488 [2024-11-26 18:27:04.204641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.488 [2024-11-26 18:27:04.204845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.488 [2024-11-26 18:27:04.204866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.488 [2024-11-26 18:27:04.204878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.488 [2024-11-26 18:27:04.204890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.488 [2024-11-26 18:27:04.217114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.488 [2024-11-26 18:27:04.217468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.488 [2024-11-26 18:27:04.217497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.488 [2024-11-26 18:27:04.217513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.488 [2024-11-26 18:27:04.217748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.488 [2024-11-26 18:27:04.217953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.488 [2024-11-26 18:27:04.217974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.488 [2024-11-26 18:27:04.217986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.488 [2024-11-26 18:27:04.217998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.488 4180.60 IOPS, 16.33 MiB/s [2024-11-26T17:27:04.499Z] [2024-11-26 18:27:04.230231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.488 [2024-11-26 18:27:04.230568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.488 [2024-11-26 18:27:04.230598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.488 [2024-11-26 18:27:04.230614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.488 [2024-11-26 18:27:04.230848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.488 [2024-11-26 18:27:04.231051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.488 [2024-11-26 18:27:04.231072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.488 [2024-11-26 18:27:04.231084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.488 [2024-11-26 18:27:04.231096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.488 [2024-11-26 18:27:04.243411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.488 [2024-11-26 18:27:04.243817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.488 [2024-11-26 18:27:04.243846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.488 [2024-11-26 18:27:04.243862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.488 [2024-11-26 18:27:04.244100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.488 [2024-11-26 18:27:04.244317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.488 [2024-11-26 18:27:04.244353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.488 [2024-11-26 18:27:04.244366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.488 [2024-11-26 18:27:04.244378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.488 [2024-11-26 18:27:04.256587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.488 [2024-11-26 18:27:04.256993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.488 [2024-11-26 18:27:04.257021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.488 [2024-11-26 18:27:04.257037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.488 [2024-11-26 18:27:04.257268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.488 [2024-11-26 18:27:04.257472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.488 [2024-11-26 18:27:04.257493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.488 [2024-11-26 18:27:04.257506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.488 [2024-11-26 18:27:04.257518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.488 [2024-11-26 18:27:04.269736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.488 [2024-11-26 18:27:04.270080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.488 [2024-11-26 18:27:04.270112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.488 [2024-11-26 18:27:04.270129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.488 [2024-11-26 18:27:04.270371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.488 [2024-11-26 18:27:04.270565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.488 [2024-11-26 18:27:04.270586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.488 [2024-11-26 18:27:04.270599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.488 [2024-11-26 18:27:04.270626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.488 [2024-11-26 18:27:04.282788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.488 [2024-11-26 18:27:04.283238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.488 [2024-11-26 18:27:04.283293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.488 [2024-11-26 18:27:04.283320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.488 [2024-11-26 18:27:04.283568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.488 [2024-11-26 18:27:04.283773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.488 [2024-11-26 18:27:04.283794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.489 [2024-11-26 18:27:04.283807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.489 [2024-11-26 18:27:04.283818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.489 [2024-11-26 18:27:04.295769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.489 [2024-11-26 18:27:04.296110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.489 [2024-11-26 18:27:04.296138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.489 [2024-11-26 18:27:04.296153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.489 [2024-11-26 18:27:04.296382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.489 [2024-11-26 18:27:04.296582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.489 [2024-11-26 18:27:04.296603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.489 [2024-11-26 18:27:04.296616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.489 [2024-11-26 18:27:04.296629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.489 [2024-11-26 18:27:04.309036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.489 [2024-11-26 18:27:04.309389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.489 [2024-11-26 18:27:04.309418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.489 [2024-11-26 18:27:04.309434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.489 [2024-11-26 18:27:04.309675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.489 [2024-11-26 18:27:04.309880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.489 [2024-11-26 18:27:04.309900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.489 [2024-11-26 18:27:04.309912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.489 [2024-11-26 18:27:04.309924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.489 [2024-11-26 18:27:04.322210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.489 [2024-11-26 18:27:04.322630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.489 [2024-11-26 18:27:04.322695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.489 [2024-11-26 18:27:04.322711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.489 [2024-11-26 18:27:04.322942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.489 [2024-11-26 18:27:04.323130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.489 [2024-11-26 18:27:04.323150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.489 [2024-11-26 18:27:04.323163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.489 [2024-11-26 18:27:04.323175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.489 [2024-11-26 18:27:04.335470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.489 [2024-11-26 18:27:04.335938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.489 [2024-11-26 18:27:04.336007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.489 [2024-11-26 18:27:04.336034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.489 [2024-11-26 18:27:04.336341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.489 [2024-11-26 18:27:04.336647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.489 [2024-11-26 18:27:04.336690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.489 [2024-11-26 18:27:04.336710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.489 [2024-11-26 18:27:04.336731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.489 [2024-11-26 18:27:04.349548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.489 [2024-11-26 18:27:04.349945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.489 [2024-11-26 18:27:04.349999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.489 [2024-11-26 18:27:04.350016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.489 [2024-11-26 18:27:04.350266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.489 [2024-11-26 18:27:04.350506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.489 [2024-11-26 18:27:04.350534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.489 [2024-11-26 18:27:04.350548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.489 [2024-11-26 18:27:04.350561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.489 [2024-11-26 18:27:04.362823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.489 [2024-11-26 18:27:04.363170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.489 [2024-11-26 18:27:04.363199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.489 [2024-11-26 18:27:04.363214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.489 [2024-11-26 18:27:04.363477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.489 [2024-11-26 18:27:04.363712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.489 [2024-11-26 18:27:04.363734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.489 [2024-11-26 18:27:04.363747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.489 [2024-11-26 18:27:04.363759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.489 [2024-11-26 18:27:04.375843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.489 [2024-11-26 18:27:04.376217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.489 [2024-11-26 18:27:04.376246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.489 [2024-11-26 18:27:04.376262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.489 [2024-11-26 18:27:04.376543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.489 [2024-11-26 18:27:04.376771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.489 [2024-11-26 18:27:04.376791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.489 [2024-11-26 18:27:04.376804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.489 [2024-11-26 18:27:04.376816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.489 [2024-11-26 18:27:04.388950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.489 [2024-11-26 18:27:04.389356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.489 [2024-11-26 18:27:04.389385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.489 [2024-11-26 18:27:04.389401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.489 [2024-11-26 18:27:04.389638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.489 [2024-11-26 18:27:04.389844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.489 [2024-11-26 18:27:04.389864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.489 [2024-11-26 18:27:04.389877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.489 [2024-11-26 18:27:04.389889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.490 [2024-11-26 18:27:04.402159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.490 [2024-11-26 18:27:04.402530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.490 [2024-11-26 18:27:04.402559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.490 [2024-11-26 18:27:04.402575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.490 [2024-11-26 18:27:04.402829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.490 [2024-11-26 18:27:04.403035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.490 [2024-11-26 18:27:04.403054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.490 [2024-11-26 18:27:04.403066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.490 [2024-11-26 18:27:04.403078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.490 [2024-11-26 18:27:04.415452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.490 [2024-11-26 18:27:04.415928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.490 [2024-11-26 18:27:04.415982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.490 [2024-11-26 18:27:04.415998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.490 [2024-11-26 18:27:04.416242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.490 [2024-11-26 18:27:04.416459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.490 [2024-11-26 18:27:04.416480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.490 [2024-11-26 18:27:04.416492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.490 [2024-11-26 18:27:04.416504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.490 [2024-11-26 18:27:04.428603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.490 [2024-11-26 18:27:04.428997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.490 [2024-11-26 18:27:04.429053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.490 [2024-11-26 18:27:04.429069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.490 [2024-11-26 18:27:04.429325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.490 [2024-11-26 18:27:04.429541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.490 [2024-11-26 18:27:04.429561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.490 [2024-11-26 18:27:04.429573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.490 [2024-11-26 18:27:04.429586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.490 [2024-11-26 18:27:04.441832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.490 [2024-11-26 18:27:04.442175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.490 [2024-11-26 18:27:04.442208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.490 [2024-11-26 18:27:04.442224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.490 [2024-11-26 18:27:04.442466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.490 [2024-11-26 18:27:04.442675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.490 [2024-11-26 18:27:04.442695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.490 [2024-11-26 18:27:04.442707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.490 [2024-11-26 18:27:04.442719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.490 [2024-11-26 18:27:04.455044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.490 [2024-11-26 18:27:04.455390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.490 [2024-11-26 18:27:04.455420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.490 [2024-11-26 18:27:04.455436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.490 [2024-11-26 18:27:04.455673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.490 [2024-11-26 18:27:04.455878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.490 [2024-11-26 18:27:04.455898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.490 [2024-11-26 18:27:04.455910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.490 [2024-11-26 18:27:04.455921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.490 [2024-11-26 18:27:04.468702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.490 [2024-11-26 18:27:04.469099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.490 [2024-11-26 18:27:04.469168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.490 [2024-11-26 18:27:04.469184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.490 [2024-11-26 18:27:04.469410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.490 [2024-11-26 18:27:04.469633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.490 [2024-11-26 18:27:04.469668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.490 [2024-11-26 18:27:04.469681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.490 [2024-11-26 18:27:04.469693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.772 [2024-11-26 18:27:04.482361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.772 [2024-11-26 18:27:04.482715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.772 [2024-11-26 18:27:04.482745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.772 [2024-11-26 18:27:04.482762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.772 [2024-11-26 18:27:04.482998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.772 [2024-11-26 18:27:04.483222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.772 [2024-11-26 18:27:04.483244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.772 [2024-11-26 18:27:04.483257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.772 [2024-11-26 18:27:04.483270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.772 [2024-11-26 18:27:04.495807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.772 [2024-11-26 18:27:04.496177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.772 [2024-11-26 18:27:04.496204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.772 [2024-11-26 18:27:04.496219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.772 [2024-11-26 18:27:04.496457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.772 [2024-11-26 18:27:04.496699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.772 [2024-11-26 18:27:04.496719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.772 [2024-11-26 18:27:04.496731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.772 [2024-11-26 18:27:04.496743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.772 [2024-11-26 18:27:04.509198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.772 [2024-11-26 18:27:04.509529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.772 [2024-11-26 18:27:04.509577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.772 [2024-11-26 18:27:04.509594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.772 [2024-11-26 18:27:04.509858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.772 [2024-11-26 18:27:04.510069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.772 [2024-11-26 18:27:04.510090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.772 [2024-11-26 18:27:04.510103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.772 [2024-11-26 18:27:04.510115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.772 [2024-11-26 18:27:04.522769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.772 [2024-11-26 18:27:04.523226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.772 [2024-11-26 18:27:04.523255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.772 [2024-11-26 18:27:04.523272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.772 [2024-11-26 18:27:04.523497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.772 [2024-11-26 18:27:04.523739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.772 [2024-11-26 18:27:04.523764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.772 [2024-11-26 18:27:04.523778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.772 [2024-11-26 18:27:04.523805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.772 [2024-11-26 18:27:04.536282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.772 [2024-11-26 18:27:04.536657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.772 [2024-11-26 18:27:04.536685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.772 [2024-11-26 18:27:04.536700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.772 [2024-11-26 18:27:04.536923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.772 [2024-11-26 18:27:04.537134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.772 [2024-11-26 18:27:04.537154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.772 [2024-11-26 18:27:04.537166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.772 [2024-11-26 18:27:04.537178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.772 [2024-11-26 18:27:04.549709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.772 [2024-11-26 18:27:04.550064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.772 [2024-11-26 18:27:04.550092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.772 [2024-11-26 18:27:04.550108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.772 [2024-11-26 18:27:04.550359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.772 [2024-11-26 18:27:04.550581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.772 [2024-11-26 18:27:04.550603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.772 [2024-11-26 18:27:04.550617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.772 [2024-11-26 18:27:04.550644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.772 [2024-11-26 18:27:04.562926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.772 [2024-11-26 18:27:04.563401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.772 [2024-11-26 18:27:04.563431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.772 [2024-11-26 18:27:04.563448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.772 [2024-11-26 18:27:04.563689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.772 [2024-11-26 18:27:04.563911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.772 [2024-11-26 18:27:04.563932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.772 [2024-11-26 18:27:04.563944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.563955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.773 [2024-11-26 18:27:04.576153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.773 [2024-11-26 18:27:04.576584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.773 [2024-11-26 18:27:04.576627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.773 [2024-11-26 18:27:04.576643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.773 [2024-11-26 18:27:04.576872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.773 [2024-11-26 18:27:04.577061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.773 [2024-11-26 18:27:04.577080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.773 [2024-11-26 18:27:04.577093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.577105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.773 [2024-11-26 18:27:04.589247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.773 [2024-11-26 18:27:04.589635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.773 [2024-11-26 18:27:04.589663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.773 [2024-11-26 18:27:04.589679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.773 [2024-11-26 18:27:04.589901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.773 [2024-11-26 18:27:04.590146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.773 [2024-11-26 18:27:04.590170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.773 [2024-11-26 18:27:04.590183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.590196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.773 [2024-11-26 18:27:04.603476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.773 [2024-11-26 18:27:04.603864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.773 [2024-11-26 18:27:04.603894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.773 [2024-11-26 18:27:04.603911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.773 [2024-11-26 18:27:04.604146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.773 [2024-11-26 18:27:04.604382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.773 [2024-11-26 18:27:04.604404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.773 [2024-11-26 18:27:04.604418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.604430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.773 [2024-11-26 18:27:04.616769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.773 [2024-11-26 18:27:04.617063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.773 [2024-11-26 18:27:04.617121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.773 [2024-11-26 18:27:04.617156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.773 [2024-11-26 18:27:04.617406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.773 [2024-11-26 18:27:04.617624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.773 [2024-11-26 18:27:04.617644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.773 [2024-11-26 18:27:04.617656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.617668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.773 [2024-11-26 18:27:04.629922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.773 [2024-11-26 18:27:04.630279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.773 [2024-11-26 18:27:04.630336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.773 [2024-11-26 18:27:04.630353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.773 [2024-11-26 18:27:04.630601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.773 [2024-11-26 18:27:04.630806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.773 [2024-11-26 18:27:04.630825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.773 [2024-11-26 18:27:04.630838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.630849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.773 [2024-11-26 18:27:04.643139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.773 [2024-11-26 18:27:04.643489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.773 [2024-11-26 18:27:04.643517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.773 [2024-11-26 18:27:04.643533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.773 [2024-11-26 18:27:04.643765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.773 [2024-11-26 18:27:04.643970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.773 [2024-11-26 18:27:04.643989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.773 [2024-11-26 18:27:04.644001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.644013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.773 [2024-11-26 18:27:04.656406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.773 [2024-11-26 18:27:04.656729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.773 [2024-11-26 18:27:04.656757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.773 [2024-11-26 18:27:04.656773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.773 [2024-11-26 18:27:04.656995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.773 [2024-11-26 18:27:04.657202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.773 [2024-11-26 18:27:04.657222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.773 [2024-11-26 18:27:04.657233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.657245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.773 [2024-11-26 18:27:04.669500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.773 [2024-11-26 18:27:04.669971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.773 [2024-11-26 18:27:04.670026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.773 [2024-11-26 18:27:04.670042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.773 [2024-11-26 18:27:04.670287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.773 [2024-11-26 18:27:04.670526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.773 [2024-11-26 18:27:04.670546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.773 [2024-11-26 18:27:04.670559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.670571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.773 [2024-11-26 18:27:04.682687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.773 [2024-11-26 18:27:04.683078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.773 [2024-11-26 18:27:04.683131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.773 [2024-11-26 18:27:04.683147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.773 [2024-11-26 18:27:04.683389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.773 [2024-11-26 18:27:04.683605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.773 [2024-11-26 18:27:04.683624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.773 [2024-11-26 18:27:04.683637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.773 [2024-11-26 18:27:04.683663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.773 [2024-11-26 18:27:04.695896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.774 [2024-11-26 18:27:04.696239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.774 [2024-11-26 18:27:04.696266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.774 [2024-11-26 18:27:04.696282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.774 [2024-11-26 18:27:04.696562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.774 [2024-11-26 18:27:04.696789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.774 [2024-11-26 18:27:04.696813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.774 [2024-11-26 18:27:04.696826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.774 [2024-11-26 18:27:04.696838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.774 [2024-11-26 18:27:04.709120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.774 [2024-11-26 18:27:04.709496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.774 [2024-11-26 18:27:04.709525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.774 [2024-11-26 18:27:04.709542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.774 [2024-11-26 18:27:04.709806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.774 [2024-11-26 18:27:04.709995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.774 [2024-11-26 18:27:04.710014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.774 [2024-11-26 18:27:04.710026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.774 [2024-11-26 18:27:04.710038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.774 [2024-11-26 18:27:04.722348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.774 [2024-11-26 18:27:04.722753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.774 [2024-11-26 18:27:04.722782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.774 [2024-11-26 18:27:04.722797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.774 [2024-11-26 18:27:04.723034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.774 [2024-11-26 18:27:04.723239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.774 [2024-11-26 18:27:04.723258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.774 [2024-11-26 18:27:04.723270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.774 [2024-11-26 18:27:04.723296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.774 [2024-11-26 18:27:04.735562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.774 [2024-11-26 18:27:04.735982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.774 [2024-11-26 18:27:04.736010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.774 [2024-11-26 18:27:04.736027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.774 [2024-11-26 18:27:04.736264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.774 [2024-11-26 18:27:04.736503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.774 [2024-11-26 18:27:04.736525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.774 [2024-11-26 18:27:04.736537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.774 [2024-11-26 18:27:04.736550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.774 [2024-11-26 18:27:04.748728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.774 [2024-11-26 18:27:04.749059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.774 [2024-11-26 18:27:04.749088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.774 [2024-11-26 18:27:04.749104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.774 [2024-11-26 18:27:04.749351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.774 [2024-11-26 18:27:04.749552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.774 [2024-11-26 18:27:04.749572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.774 [2024-11-26 18:27:04.749585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.774 [2024-11-26 18:27:04.749598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:16.774 [2024-11-26 18:27:04.761847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.774 [2024-11-26 18:27:04.762300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.774 [2024-11-26 18:27:04.762358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.774 [2024-11-26 18:27:04.762376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.774 [2024-11-26 18:27:04.762630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.774 [2024-11-26 18:27:04.762834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.774 [2024-11-26 18:27:04.762853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.774 [2024-11-26 18:27:04.762865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.774 [2024-11-26 18:27:04.762877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:16.774 [2024-11-26 18:27:04.775004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:16.774 [2024-11-26 18:27:04.775348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.774 [2024-11-26 18:27:04.775392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:16.774 [2024-11-26 18:27:04.775409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:16.774 [2024-11-26 18:27:04.775650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:16.774 [2024-11-26 18:27:04.775855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:16.774 [2024-11-26 18:27:04.775874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:16.774 [2024-11-26 18:27:04.775886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:16.774 [2024-11-26 18:27:04.775898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.034 [2024-11-26 18:27:04.788619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.034 [2024-11-26 18:27:04.789022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.034 [2024-11-26 18:27:04.789082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.034 [2024-11-26 18:27:04.789098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.034 [2024-11-26 18:27:04.789375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.034 [2024-11-26 18:27:04.789584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.034 [2024-11-26 18:27:04.789605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.034 [2024-11-26 18:27:04.789618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.034 [2024-11-26 18:27:04.789646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 719455 Killed "${NVMF_APP[@]}" "$@" 00:31:17.034 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:31:17.034 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:31:17.034 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:17.034 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:17.034 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.034 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=720528 00:31:17.034 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:17.034 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 720528 00:31:17.035 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 720528 ']' 00:31:17.035 [2024-11-26 18:27:04.802173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.035 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.035 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.035 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.035 [2024-11-26 18:27:04.802572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.035 [2024-11-26 18:27:04.802625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.035 [2024-11-26 18:27:04.802642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.035 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.035 18:27:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.035 [2024-11-26 18:27:04.802884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.035 [2024-11-26 18:27:04.803085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.035 [2024-11-26 18:27:04.803105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.035 [2024-11-26 18:27:04.803118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.035 [2024-11-26 18:27:04.803130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.035 [2024-11-26 18:27:04.815620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.035 [2024-11-26 18:27:04.816068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.035 [2024-11-26 18:27:04.816097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.035 [2024-11-26 18:27:04.816123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.035 [2024-11-26 18:27:04.816378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.035 [2024-11-26 18:27:04.816600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.035 [2024-11-26 18:27:04.816635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.035 [2024-11-26 18:27:04.816649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.035 [2024-11-26 18:27:04.816662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.035 [2024-11-26 18:27:04.829015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.035 [2024-11-26 18:27:04.829416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.035 [2024-11-26 18:27:04.829444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.035 [2024-11-26 18:27:04.829461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.035 [2024-11-26 18:27:04.829691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.035 [2024-11-26 18:27:04.829903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.035 [2024-11-26 18:27:04.829922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.035 [2024-11-26 18:27:04.829934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.035 [2024-11-26 18:27:04.829947] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.035 [2024-11-26 18:27:04.842443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.035 [2024-11-26 18:27:04.842873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.035 [2024-11-26 18:27:04.842912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.035 [2024-11-26 18:27:04.842940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.035 [2024-11-26 18:27:04.843247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.035 [2024-11-26 18:27:04.843547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.035 [2024-11-26 18:27:04.843591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.035 [2024-11-26 18:27:04.843613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.035 [2024-11-26 18:27:04.843633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.035 [2024-11-26 18:27:04.848644] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:31:17.035 [2024-11-26 18:27:04.848712] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.035 [2024-11-26 18:27:04.856534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.035 [2024-11-26 18:27:04.857002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.035 [2024-11-26 18:27:04.857034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.035 [2024-11-26 18:27:04.857051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.035 [2024-11-26 18:27:04.857294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.035 [2024-11-26 18:27:04.857526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.035 [2024-11-26 18:27:04.857547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.035 [2024-11-26 18:27:04.857560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.035 [2024-11-26 18:27:04.857573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.035 [2024-11-26 18:27:04.869946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.035 [2024-11-26 18:27:04.870312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.035 [2024-11-26 18:27:04.870343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.035 [2024-11-26 18:27:04.870359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.035 [2024-11-26 18:27:04.870602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.035 [2024-11-26 18:27:04.870813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.035 [2024-11-26 18:27:04.870833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.035 [2024-11-26 18:27:04.870845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.035 [2024-11-26 18:27:04.870857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.035 [2024-11-26 18:27:04.883162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.035 [2024-11-26 18:27:04.883503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.035 [2024-11-26 18:27:04.883548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.035 [2024-11-26 18:27:04.883564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.035 [2024-11-26 18:27:04.883796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.035 [2024-11-26 18:27:04.884007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.035 [2024-11-26 18:27:04.884027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.035 [2024-11-26 18:27:04.884039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.035 [2024-11-26 18:27:04.884051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.035 [2024-11-26 18:27:04.896874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.035 [2024-11-26 18:27:04.897237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.035 [2024-11-26 18:27:04.897266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.035 [2024-11-26 18:27:04.897312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.035 [2024-11-26 18:27:04.897531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.035 [2024-11-26 18:27:04.897763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.035 [2024-11-26 18:27:04.897784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.035 [2024-11-26 18:27:04.897796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.035 [2024-11-26 18:27:04.897808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.035 [2024-11-26 18:27:04.910387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.035 [2024-11-26 18:27:04.910832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.035 [2024-11-26 18:27:04.910872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.035 [2024-11-26 18:27:04.910889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.035 [2024-11-26 18:27:04.911131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.035 [2024-11-26 18:27:04.911378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.035 [2024-11-26 18:27:04.911401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.035 [2024-11-26 18:27:04.911415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:04.911429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.036 [2024-11-26 18:27:04.922629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:17.036 [2024-11-26 18:27:04.923796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.036 [2024-11-26 18:27:04.924089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.036 [2024-11-26 18:27:04.924131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.036 [2024-11-26 18:27:04.924147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.036 [2024-11-26 18:27:04.924397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.036 [2024-11-26 18:27:04.924636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.036 [2024-11-26 18:27:04.924671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.036 [2024-11-26 18:27:04.924684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:04.924695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.036 [2024-11-26 18:27:04.937123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.036 [2024-11-26 18:27:04.937715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.036 [2024-11-26 18:27:04.937754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.036 [2024-11-26 18:27:04.937784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.036 [2024-11-26 18:27:04.938052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.036 [2024-11-26 18:27:04.938251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.036 [2024-11-26 18:27:04.938270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.036 [2024-11-26 18:27:04.938296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:04.938333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.036 [2024-11-26 18:27:04.950559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.036 [2024-11-26 18:27:04.950995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.036 [2024-11-26 18:27:04.951024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.036 [2024-11-26 18:27:04.951046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.036 [2024-11-26 18:27:04.951286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.036 [2024-11-26 18:27:04.951517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.036 [2024-11-26 18:27:04.951539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.036 [2024-11-26 18:27:04.951553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:04.951565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.036 [2024-11-26 18:27:04.963861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.036 [2024-11-26 18:27:04.964284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.036 [2024-11-26 18:27:04.964321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.036 [2024-11-26 18:27:04.964340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.036 [2024-11-26 18:27:04.964571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.036 [2024-11-26 18:27:04.964799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.036 [2024-11-26 18:27:04.964819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.036 [2024-11-26 18:27:04.964833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:04.964845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.036 [2024-11-26 18:27:04.977140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.036 [2024-11-26 18:27:04.977584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.036 [2024-11-26 18:27:04.977627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.036 [2024-11-26 18:27:04.977644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.036 [2024-11-26 18:27:04.977885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.036 [2024-11-26 18:27:04.978096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.036 [2024-11-26 18:27:04.978115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.036 [2024-11-26 18:27:04.978135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:04.978148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.036 [2024-11-26 18:27:04.980430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.036 [2024-11-26 18:27:04.980460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.036 [2024-11-26 18:27:04.980474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.036 [2024-11-26 18:27:04.980486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.036 [2024-11-26 18:27:04.980495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:17.036 [2024-11-26 18:27:04.981905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:17.036 [2024-11-26 18:27:04.981964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:17.036 [2024-11-26 18:27:04.981968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.036 [2024-11-26 18:27:04.990724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.036 [2024-11-26 18:27:04.991166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.036 [2024-11-26 18:27:04.991203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.036 [2024-11-26 18:27:04.991222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.036 [2024-11-26 18:27:04.991470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.036 [2024-11-26 18:27:04.991707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.036 [2024-11-26 18:27:04.991729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.036 [2024-11-26 18:27:04.991745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:04.991759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.036 [2024-11-26 18:27:05.004240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.036 [2024-11-26 18:27:05.004782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.036 [2024-11-26 18:27:05.004822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.036 [2024-11-26 18:27:05.004841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.036 [2024-11-26 18:27:05.005095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.036 [2024-11-26 18:27:05.005334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.036 [2024-11-26 18:27:05.005357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.036 [2024-11-26 18:27:05.005373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:05.005389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.036 [2024-11-26 18:27:05.017876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.036 [2024-11-26 18:27:05.018391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.036 [2024-11-26 18:27:05.018432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.036 [2024-11-26 18:27:05.018466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.036 [2024-11-26 18:27:05.018721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.036 [2024-11-26 18:27:05.018933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.036 [2024-11-26 18:27:05.018954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.036 [2024-11-26 18:27:05.018969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:05.018985] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.036 [2024-11-26 18:27:05.031418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.036 [2024-11-26 18:27:05.031920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.036 [2024-11-26 18:27:05.031961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.036 [2024-11-26 18:27:05.031981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.036 [2024-11-26 18:27:05.032235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.036 [2024-11-26 18:27:05.032481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.036 [2024-11-26 18:27:05.032505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.036 [2024-11-26 18:27:05.032521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.036 [2024-11-26 18:27:05.032538] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.296 [2024-11-26 18:27:05.045255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.296 [2024-11-26 18:27:05.045773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.296 [2024-11-26 18:27:05.045811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.296 [2024-11-26 18:27:05.045830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.296 [2024-11-26 18:27:05.046071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.296 [2024-11-26 18:27:05.046327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.296 [2024-11-26 18:27:05.046365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.296 [2024-11-26 18:27:05.046383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.296 [2024-11-26 18:27:05.046399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.296 [2024-11-26 18:27:05.058781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.296 [2024-11-26 18:27:05.059251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.296 [2024-11-26 18:27:05.059314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.296 [2024-11-26 18:27:05.059337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.296 [2024-11-26 18:27:05.059576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.296 [2024-11-26 18:27:05.059816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.296 [2024-11-26 18:27:05.059837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.296 [2024-11-26 18:27:05.059853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.296 [2024-11-26 18:27:05.059868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.296 [2024-11-26 18:27:05.072348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.296 [2024-11-26 18:27:05.072722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.296 [2024-11-26 18:27:05.072751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.296 [2024-11-26 18:27:05.072767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.296 [2024-11-26 18:27:05.072998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.296 [2024-11-26 18:27:05.073223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.296 [2024-11-26 18:27:05.073244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.296 [2024-11-26 18:27:05.073258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.296 [2024-11-26 18:27:05.073271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.296 [2024-11-26 18:27:05.085860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.296 [2024-11-26 18:27:05.086203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.296 [2024-11-26 18:27:05.086232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.297 [2024-11-26 18:27:05.086249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.297 [2024-11-26 18:27:05.086475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.297 [2024-11-26 18:27:05.086711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.297 [2024-11-26 18:27:05.086733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.297 [2024-11-26 18:27:05.086746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.297 [2024-11-26 18:27:05.086759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.297 [2024-11-26 18:27:05.099488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:31:17.297 [2024-11-26 18:27:05.099846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.297 [2024-11-26 18:27:05.099896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.297 [2024-11-26 18:27:05.099922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:17.297 [2024-11-26 18:27:05.100212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.297 [2024-11-26 18:27:05.100519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.297 [2024-11-26 18:27:05.100551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.297 [2024-11-26 18:27:05.100573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.297 [2024-11-26 18:27:05.100594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.297 [2024-11-26 18:27:05.114025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.297 [2024-11-26 18:27:05.114412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.297 [2024-11-26 18:27:05.114443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.297 [2024-11-26 18:27:05.114461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.297 [2024-11-26 18:27:05.114694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.297 [2024-11-26 18:27:05.114918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.297 [2024-11-26 18:27:05.114939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.297 [2024-11-26 18:27:05.114954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.297 [2024-11-26 18:27:05.114967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.297 [2024-11-26 18:27:05.123228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:17.297 [2024-11-26 18:27:05.127540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.297 [2024-11-26 18:27:05.127886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.297 [2024-11-26 18:27:05.127916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.297 [2024-11-26 18:27:05.127932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.297 [2024-11-26 18:27:05.128148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.297 [2024-11-26 18:27:05.128415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.297 [2024-11-26 18:27:05.128439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.297 [2024-11-26 18:27:05.128453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.297 [2024-11-26 18:27:05.128466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
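Stripped of the interleaved reconnect noise, the harness step logged here is the transport-creation RPC. A minimal sketch, assuming rpc_cmd simply forwards its arguments to SPDK's scripts/rpc.py against the running target:

# Sketch of the equivalent standalone invocation; the target acknowledges it with
# the "*** TCP Transport Init ***" notice seen above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192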
00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.297 [2024-11-26 18:27:05.141185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.297 [2024-11-26 18:27:05.141608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.297 [2024-11-26 18:27:05.141640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.297 [2024-11-26 18:27:05.141667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.297 [2024-11-26 18:27:05.141901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.297 [2024-11-26 18:27:05.142119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.297 [2024-11-26 18:27:05.142139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.297 [2024-11-26 18:27:05.142154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.297 [2024-11-26 18:27:05.142167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.297 [2024-11-26 18:27:05.154775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.297 [2024-11-26 18:27:05.155115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.297 [2024-11-26 18:27:05.155144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.297 [2024-11-26 18:27:05.155161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.297 [2024-11-26 18:27:05.155386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.297 [2024-11-26 18:27:05.155621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.297 [2024-11-26 18:27:05.155643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.297 [2024-11-26 18:27:05.155658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.297 [2024-11-26 18:27:05.155671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.297 Malloc0 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.297 [2024-11-26 18:27:05.168386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.297 [2024-11-26 18:27:05.168844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.297 [2024-11-26 18:27:05.168879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.297 [2024-11-26 18:27:05.168897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.297 [2024-11-26 18:27:05.169134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.297 [2024-11-26 18:27:05.169406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.297 [2024-11-26 18:27:05.169430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.297 [2024-11-26 18:27:05.169453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.297 [2024-11-26 18:27:05.169468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.297 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:17.297 [2024-11-26 18:27:05.182029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.297 [2024-11-26 18:27:05.182438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.297 [2024-11-26 18:27:05.182469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20cfa50 with addr=10.0.0.2, port=4420 00:31:17.297 [2024-11-26 18:27:05.182485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cfa50 is same with the state(6) to be set 00:31:17.297 [2024-11-26 18:27:05.182716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20cfa50 (9): Bad file descriptor 00:31:17.297 [2024-11-26 18:27:05.182930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:31:17.297 [2024-11-26 18:27:05.182951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:31:17.297 [2024-11-26 18:27:05.182965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:31:17.298 [2024-11-26 18:27:05.182978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:31:17.298 [2024-11-26 18:27:05.185433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.298 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.298 18:27:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 719671 00:31:17.298 [2024-11-26 18:27:05.195564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:31:17.298 3483.83 IOPS, 13.61 MiB/s [2024-11-26T17:27:05.309Z] [2024-11-26 18:27:05.226561] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
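Unpicked from the surrounding reconnect errors, the target-side bring-up that bdevperf exercises reduces to four RPCs, all visible in the trace above. A condensed sketch, again assuming rpc_cmd wraps scripts/rpc.py:

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from the harness
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420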
00:31:19.604 4179.71 IOPS, 16.33 MiB/s [2024-11-26T17:27:08.547Z] 4723.25 IOPS, 18.45 MiB/s [2024-11-26T17:27:09.480Z] 5153.00 IOPS, 20.13 MiB/s [2024-11-26T17:27:10.413Z] 5480.40 IOPS, 21.41 MiB/s [2024-11-26T17:27:11.344Z] 5748.64 IOPS, 22.46 MiB/s [2024-11-26T17:27:12.278Z] 5973.92 IOPS, 23.34 MiB/s [2024-11-26T17:27:13.651Z] 6164.77 IOPS, 24.08 MiB/s [2024-11-26T17:27:14.585Z] 6325.79 IOPS, 24.71 MiB/s [2024-11-26T17:27:14.585Z] 6470.40 IOPS, 25.27 MiB/s 00:31:26.574 Latency(us) 00:31:26.574 [2024-11-26T17:27:14.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:26.574 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:26.574 Verification LBA range: start 0x0 length 0x4000 00:31:26.574 Nvme1n1 : 15.01 6471.03 25.28 9894.27 0.00 7797.09 558.27 20583.16 00:31:26.574 [2024-11-26T17:27:14.585Z] =================================================================================================================== 00:31:26.574 [2024-11-26T17:27:14.585Z] Total : 6471.03 25.28 9894.27 0.00 7797.09 558.27 20583.16 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:26.574 rmmod nvme_tcp 00:31:26.574 rmmod nvme_fabrics 00:31:26.574 rmmod nvme_keyring 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 720528 ']' 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 720528 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 720528 ']' 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 720528 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 720528 
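The bdevperf summary above is internally consistent: with 4096-byte I/Os, the reported IOPS and MiB/s figures agree, as a quick check shows.

# 6471.03 IOPS x 4096 bytes per I/O, expressed in MiB/s:
awk 'BEGIN { printf "%.2f MiB/s\n", 6471.03 * 4096 / (1024 * 1024) }'   # prints 25.28 MiB/s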
00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 720528' 00:31:26.574 killing process with pid 720528 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 720528 00:31:26.574 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 720528 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.833 18:27:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.373 00:31:29.373 real 0m22.727s 00:31:29.373 user 0m59.755s 00:31:29.373 sys 0m4.621s 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:29.373 ************************************ 00:31:29.373 END TEST nvmf_bdevperf 00:31:29.373 ************************************ 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:29.373 ************************************ 00:31:29.373 START TEST nvmf_target_disconnect 00:31:29.373 ************************************ 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:29.373 * Looking for test storage... 
00:31:29.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:31:29.373 18:27:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:31:29.373 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.374 --rc genhtml_branch_coverage=1 00:31:29.374 --rc genhtml_function_coverage=1 00:31:29.374 --rc genhtml_legend=1 00:31:29.374 --rc geninfo_all_blocks=1 00:31:29.374 --rc geninfo_unexecuted_blocks=1 00:31:29.374 00:31:29.374 ' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.374 --rc genhtml_branch_coverage=1 00:31:29.374 --rc genhtml_function_coverage=1 00:31:29.374 --rc genhtml_legend=1 00:31:29.374 --rc geninfo_all_blocks=1 00:31:29.374 --rc geninfo_unexecuted_blocks=1 00:31:29.374 00:31:29.374 ' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.374 --rc genhtml_branch_coverage=1 00:31:29.374 --rc genhtml_function_coverage=1 00:31:29.374 --rc genhtml_legend=1 00:31:29.374 --rc geninfo_all_blocks=1 00:31:29.374 --rc geninfo_unexecuted_blocks=1 00:31:29.374 00:31:29.374 ' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.374 --rc genhtml_branch_coverage=1 00:31:29.374 --rc genhtml_function_coverage=1 00:31:29.374 --rc genhtml_legend=1 00:31:29.374 --rc geninfo_all_blocks=1 00:31:29.374 --rc geninfo_unexecuted_blocks=1 00:31:29.374 00:31:29.374 ' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:29.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.374 18:27:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:31.288 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.288 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.288 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.288 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.288 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.288 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.288 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.289 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:31.290 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:31.290 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.290 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:31.291 Found net devices under 0000:09:00.0: cvl_0_0 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:31.291 Found net devices under 0000:09:00.1: cvl_0_1 00:31:31.291 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
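With the two E810 ports exposed as cvl_0_0 and cvl_0_1, the trace that follows builds the usual point-to-point setup: the target port is moved into its own network namespace so the initiator and target sides use separate stacks. A condensed sketch of that plumbing (the individual commands appear in the trace below):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT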
00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.292 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.293 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.293 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.293 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.293 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:31.293 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:31.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:31:31.555 00:31:31.555 --- 10.0.0.2 ping statistics --- 00:31:31.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.555 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:31.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:31:31.555 00:31:31.555 --- 10.0.0.1 ping statistics --- 00:31:31.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.555 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:31.555 ************************************ 00:31:31.555 START TEST nvmf_target_disconnect_tc1 00:31:31.555 ************************************ 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:31.555 18:27:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:31.555 [2024-11-26 18:27:19.479542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.555 [2024-11-26 18:27:19.479608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x512f40 with addr=10.0.0.2, port=4420 00:31:31.555 [2024-11-26 18:27:19.479645] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:31.555 [2024-11-26 18:27:19.479674] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:31.555 [2024-11-26 18:27:19.479687] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:31:31.555 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:31.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:31.555 Initializing NVMe Controllers 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:31.555 00:31:31.555 real 0m0.099s 00:31:31.555 user 0m0.045s 00:31:31.555 sys 0m0.053s 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:31:31.555 ************************************ 00:31:31.555 END TEST nvmf_target_disconnect_tc1 00:31:31.555 ************************************ 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
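tc1 above is a negative test: the reconnect example is launched before any target exists, connect() fails with errno 111, spdk_nvme_probe() reports the error, and the harness's NOT wrapper turns that non-zero exit (es=1) into a pass. A hypothetical minimal version of such a wrapper, just to show the idiom (the real helper lives in autotest_common.sh):

NOT() {                      # hypothetical sketch, not the harness implementation
    if "$@"; then
        return 1             # wrapped command unexpectedly succeeded
    fi
    return 0                 # failure of the wrapped command is the expected outcome
}
NOT /bin/false && echo "negative test passed"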
00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:31.555 ************************************ 00:31:31.555 START TEST nvmf_target_disconnect_tc2 00:31:31.555 ************************************ 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=724196 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 724196 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 724196 ']' 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.555 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.556 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.556 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.556 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:31.815 [2024-11-26 18:27:19.598192] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:31:31.815 [2024-11-26 18:27:19.598309] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.815 [2024-11-26 18:27:19.672739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:31.815 [2024-11-26 18:27:19.728745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.815 [2024-11-26 18:27:19.728799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
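For tc2 the harness starts a fresh nvmf_tgt inside the namespace (pid 724196 above) and then blocks in waitforlisten until the target's RPC socket answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and scripts/rpc.py:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                # poll until the target is ready to accept RPCs
done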
00:31:31.815 [2024-11-26 18:27:19.728822] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.815 [2024-11-26 18:27:19.728832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.815 [2024-11-26 18:27:19.728842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.815 [2024-11-26 18:27:19.730269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:31.815 [2024-11-26 18:27:19.730331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:31.815 [2024-11-26 18:27:19.730395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:31.815 [2024-11-26 18:27:19.730398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.073 Malloc0 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.073 [2024-11-26 18:27:19.905992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.073 18:27:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.073 [2024-11-26 18:27:19.934260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=724226 00:31:32.073 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:32.074 18:27:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:31:33.975 18:27:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 724196 00:31:33.975 18:27:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:31:33.975 Read completed with error (sct=0, sc=8) 00:31:33.975 starting I/O failed 00:31:33.975 Read completed with error (sct=0, sc=8) 00:31:33.975 starting I/O failed 00:31:33.975 Read completed with error (sct=0, sc=8) 00:31:33.975 starting I/O failed 00:31:33.975 Read completed with error (sct=0, sc=8) 00:31:33.975 starting I/O failed 00:31:33.975 Read completed with error (sct=0, sc=8) 00:31:33.975 starting I/O failed 00:31:33.975 Read completed with error (sct=0, sc=8) 00:31:33.975 starting I/O failed 00:31:33.975 Read completed with error 
(sct=0, sc=8) 00:31:33.975 starting I/O failed 00:31:33.975 Write completed with error (sct=0, sc=8) 00:31:33.975 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 [2024-11-26 18:27:21.959816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read 
completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 [2024-11-26 18:27:21.960131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 
00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 [2024-11-26 18:27:21.960468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 
starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Write completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 Read completed with error (sct=0, sc=8) 00:31:33.976 starting I/O failed 00:31:33.976 [2024-11-26 18:27:21.960807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:33.976 [2024-11-26 18:27:21.960981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.961023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.961183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.961211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.961314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.961341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.961445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.961469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.961590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.961617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.961710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.961736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.961825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.961850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.961986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.962012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 
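Each "completed with error (sct=0, sc=8)" / "starting I/O failed" pair above is one of the 32 outstanding I/Os (the reconnect example was launched with -q 32) being failed back after kill -9 removed the target; status type 0 / status code 8 is the NVMe generic status "Command Aborted due to SQ Deletion", and each "CQ transport error -6 ... on qpair id N" record then retires one of the I/O qpairs (ids 4, 3, 2 and 1 above). With the console output saved to a file, the counts can be sanity-checked with something like the following (the log file name is hypothetical):

    # with -q 32 on each of the four qpairs, roughly 4 x 32 aborted completions are expected
    grep -c 'completed with error (sct=0, sc=8)' nvmf_target_disconnect_tc2.console.log
    grep -c 'CQ transport error -6' nvmf_target_disconnect_tc2.console.log    # expected: 4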
00:31:33.976 [2024-11-26 18:27:21.962089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.962115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.962228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.962253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.962365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.962407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.962521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.962568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.962735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.962763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.962889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.962917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.963023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.963059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.963177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.963204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.963293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.963328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 00:31:33.976 [2024-11-26 18:27:21.963423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.976 [2024-11-26 18:27:21.963448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.976 qpair failed and we were unable to recover it. 
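The repeating three-line pattern above and below is the reconnect example retrying the connection: errno 111 on Linux is ECONNREFUSED, meaning nothing is listening on 10.0.0.2:4420 any more now that the target has been killed, so each newly created TCP qpair fails in connect() and is torn down ("qpair failed and we were unable to recover it"). These retries continue for as long as the target stays down. The errno mapping is easy to confirm from a shell (a throwaway check, not part of the test):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused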
00:31:33.976 [2024-11-26 18:27:21.963537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.963563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.963657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.963684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.963823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.963850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.963969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.963995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.964110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.964136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.964268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.964317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.964420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.964447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.964545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.964570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.964649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.964673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.964764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.964788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 
00:31:33.977 [2024-11-26 18:27:21.964870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.964893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.965013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.965039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.965178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.965226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.965331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.965358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.965459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.965484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.965577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.965601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.965718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.965745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.965838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.965863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.965980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.966007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.966094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.966120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 
00:31:33.977 [2024-11-26 18:27:21.966220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.966247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.966365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.966389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.966480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.966504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.966624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.966658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.966771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.966795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.966875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.966900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.967009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.967032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.967141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.967179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.967301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.967348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.967464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.967493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 
00:31:33.977 [2024-11-26 18:27:21.967587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.967613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.967704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.967729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.967825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.967853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.967944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.967969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.968108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.968133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.968225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.968251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.968340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.968367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.968456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.968481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.968565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.968591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.968704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.968730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 
00:31:33.977 [2024-11-26 18:27:21.968814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.968839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.968964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.968993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.969078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.969103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.969195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.969221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.969330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.969355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.969438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.969463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.969551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.969575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.969670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.969696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.969778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.969803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.969931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.969969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 
00:31:33.977 [2024-11-26 18:27:21.970068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.970093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.970181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.970206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.970343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.970370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.970489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.970515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.970629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.970655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.970766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.970792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.970902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.970928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.971046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.971075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.971217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.971243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.971363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.971391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 
00:31:33.977 [2024-11-26 18:27:21.971477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.971502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.971586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.971614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.971729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.971755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.977 qpair failed and we were unable to recover it. 00:31:33.977 [2024-11-26 18:27:21.971870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.977 [2024-11-26 18:27:21.971895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.972013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.972039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.972166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.972206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.972295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.972335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.972426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.972451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.972543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.972568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.972659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.972684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 
00:31:33.978 [2024-11-26 18:27:21.972798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.972824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.972964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.972990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.973116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.973156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.973272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.973301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.973399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.973425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.973541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.973567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.973707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.973732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.973852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.973878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.973962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.973988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.974121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.974161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 
00:31:33.978 [2024-11-26 18:27:21.974311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.974352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.974442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.974471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.974589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.974615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.974728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.974755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.974899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.974962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.975075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.975103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.975244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.975270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.975402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.975433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.975562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.975590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.975704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.975731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 
00:31:33.978 [2024-11-26 18:27:21.975817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.975848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.975999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.976026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.976136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.976163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.976292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.976343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.976463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.976490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.976576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.976601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.976691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.976718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.976812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.976837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.976941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.976966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.977059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.977083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 
00:31:33.978 [2024-11-26 18:27:21.977164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.977188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.977297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.977338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.977450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.977476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.977554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.977578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.977708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.977735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.977870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.977897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.978012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.978038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.978153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.978179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.978281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.978326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.978417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.978445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 
00:31:33.978 [2024-11-26 18:27:21.978557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.978597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.978690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.978718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.978806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.978832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.978969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.978996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.979089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.979115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.979247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.979273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.979404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.979431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.979524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.979554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.979692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.979718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.979825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.979850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 
00:31:33.978 [2024-11-26 18:27:21.979968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.978 [2024-11-26 18:27:21.979997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.978 qpair failed and we were unable to recover it. 00:31:33.978 [2024-11-26 18:27:21.980083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.980108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.980191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.980217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.980299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.980334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.980451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.980477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.980570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.980595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.980685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.980715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.980801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.980826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.980939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.980964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.981049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.981074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 
00:31:33.979 [2024-11-26 18:27:21.981152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.981177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.981297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.981329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.981415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.981440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.981549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.981573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.981714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.981740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.981824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.981849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.981938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.981963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.982098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:33.979 [2024-11-26 18:27:21.982124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:33.979 qpair failed and we were unable to recover it. 00:31:33.979 [2024-11-26 18:27:21.982208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.259 [2024-11-26 18:27:21.982232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.259 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.982341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.982379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 
00:31:34.260 [2024-11-26 18:27:21.982511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.982539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.982630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.982655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.982762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.982789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.982895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.982944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.983043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.983077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.983170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.983198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.983289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.983322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.983409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.983434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.983516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.983540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.983652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.983677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 
00:31:34.260 [2024-11-26 18:27:21.983791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.983821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.983915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.983940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.984051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.984078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.984165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.984190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.984311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.984336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.984446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.984472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.984579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.984604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.984694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.984721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.984865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.984892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.985013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.985043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 
00:31:34.260 [2024-11-26 18:27:21.985155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.985181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.985278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.985325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.985426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.985452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.985535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.985561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.985675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.985708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.985888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.985941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.986154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.986208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.986331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.986359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.986473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.986499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.986646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.986673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 
00:31:34.260 [2024-11-26 18:27:21.986764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.986789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.986898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.986929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.987054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.987080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.987188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.987228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.987381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.987411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.987506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.987531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.260 qpair failed and we were unable to recover it. 00:31:34.260 [2024-11-26 18:27:21.987649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.260 [2024-11-26 18:27:21.987677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.987792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.987819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.987959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.987986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.988096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.988121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 
00:31:34.261 [2024-11-26 18:27:21.988208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.988233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.988335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.988372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.988462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.988488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.988604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.988630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.988711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.988737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.988824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.988848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.988937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.988964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.989099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.989154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.989266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.989292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.989404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.989429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 
00:31:34.261 [2024-11-26 18:27:21.989514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.989538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.989616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.989641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.989757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.989782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.989918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.989945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.990024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.990049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.990132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.990158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.990274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.990298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.990395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.990420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.990513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.990540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.990755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.990782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 
00:31:34.261 [2024-11-26 18:27:21.990900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.990925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.991041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.991069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.991183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.991210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.991294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.991325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.991416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.991442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.991540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.991578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.991669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.991697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.991783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.991809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.991921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.991948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.992054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.992081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 
00:31:34.261 [2024-11-26 18:27:21.992163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.992188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.992275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.992312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.992399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.992425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.992542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.261 [2024-11-26 18:27:21.992567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.261 qpair failed and we were unable to recover it. 00:31:34.261 [2024-11-26 18:27:21.992689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.992716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.992827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.992853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.992980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.993006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.993102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.993128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.993246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.993272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.993370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.993395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 
00:31:34.262 [2024-11-26 18:27:21.993486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.993510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.993592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.993617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.993699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.993723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.993832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.993857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.993974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.993999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.994139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.994179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.994298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.994334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.994446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.994470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.994581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.994607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.994699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.994723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 
00:31:34.262 [2024-11-26 18:27:21.994845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.994870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.994957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.994982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.995060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.995085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.995174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.995198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.995293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.995327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.995408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.995432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.995512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.995535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.995649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.995674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.995826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.995859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.995972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.995999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 
00:31:34.262 [2024-11-26 18:27:21.996111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.996138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.996224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.996250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.996387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.996414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.996529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.996556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.996675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.996702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.996846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.996899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.997014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.997044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.997146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.997172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.997260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.997285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.997379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.997404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 
00:31:34.262 [2024-11-26 18:27:21.997509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.997533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.997616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.262 [2024-11-26 18:27:21.997641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.262 qpair failed and we were unable to recover it. 00:31:34.262 [2024-11-26 18:27:21.997736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.997760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.997872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.997897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.997981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.998005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.998114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.998139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.998230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.998254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.998362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.998388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.998472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.998496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.998579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.998604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 
00:31:34.263 [2024-11-26 18:27:21.998700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.998724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.998825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.998849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.998931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.998955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.999037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.999061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.999165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.999189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.999309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.999338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.999445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.999469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.999559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.999584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.999666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.999690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:21.999771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.999794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 
00:31:34.263 [2024-11-26 18:27:21.999870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:21.999894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.000029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.000053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.000158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.000183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.000286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.000324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.000432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.000457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.000544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.000568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.000675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.000700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.000786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.000809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.000893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.000917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.001044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.001082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 
00:31:34.263 [2024-11-26 18:27:22.001229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.001266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.001388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.001416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.001499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.001525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.001626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.001652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.001786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.001813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.001900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.001925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.002053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.002093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.002217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.002245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.002336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.002363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 00:31:34.263 [2024-11-26 18:27:22.002447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.263 [2024-11-26 18:27:22.002472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.263 qpair failed and we were unable to recover it. 
00:31:34.263 [2024-11-26 18:27:22.002580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.002606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.002724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.002750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.002867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.002899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.003006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.003032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.003188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.003228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.003326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.003354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.003452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.003477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.003591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.003616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.003755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.003781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.003861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.003887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 
00:31:34.264 [2024-11-26 18:27:22.004024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.004052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.004135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.004161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.004279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.004333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.004474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.004501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.004584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.004609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.004718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.004743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.004855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.004918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.005051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.005087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.005252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.005287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.005400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.005428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 
00:31:34.264 [2024-11-26 18:27:22.005569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.005596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.005716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.005742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.005849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.005876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.005987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.006015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.006158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.006184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.006313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.006344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.006458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.006485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.006598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.006624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.006740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.006767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.006887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.006915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 
00:31:34.264 [2024-11-26 18:27:22.007032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.007059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.007198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.007225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.007348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.264 [2024-11-26 18:27:22.007375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.264 qpair failed and we were unable to recover it. 00:31:34.264 [2024-11-26 18:27:22.007492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.007518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.007603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.007628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.007716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.007742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.007848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.007873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.008012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.008041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.008135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.008162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.008253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.008278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 
00:31:34.265 [2024-11-26 18:27:22.008392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.008420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.008517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.008543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.008631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.008664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.008779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.008806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.008947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.008974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.009086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.009112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.009237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.009266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.009413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.009440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.009551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.009577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.009743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.009770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 
00:31:34.265 [2024-11-26 18:27:22.009884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.009916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.010001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.010026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.010144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.010168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.010283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.010314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.010425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.010449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.010570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.010596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.010743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.010769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.010858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.010883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.011026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.011055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.011177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.011203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 
00:31:34.265 [2024-11-26 18:27:22.011290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.011324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.011437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.011464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.011568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.011595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.011701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.011728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.011849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.011877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.011993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.012019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.012112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.012138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.012236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.012261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.012422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.012450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.012544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.012576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 
00:31:34.265 [2024-11-26 18:27:22.012723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.265 [2024-11-26 18:27:22.012751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.265 qpair failed and we were unable to recover it. 00:31:34.265 [2024-11-26 18:27:22.012866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.012893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.012982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.013007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.013118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.013144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.013255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.013282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.013399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.013424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.013538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.013565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.013682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.013709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.013846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.013873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.013960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.013988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 
00:31:34.266 [2024-11-26 18:27:22.014110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.014137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.014225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.014250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.014365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.014390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.014473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.014505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.014618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.014645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.014755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.014815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.014899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.014924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.015033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.015058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.015170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.015195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.015314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.015342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 
00:31:34.266 [2024-11-26 18:27:22.015425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.015450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.015555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.015579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.015693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.015720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.015802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.015827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.015949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.015976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.016085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.016112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.016260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.016287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.016413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.016442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.016557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.016584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.016689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.016714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 
00:31:34.266 [2024-11-26 18:27:22.016829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.016858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.016981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.017008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.017133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.017161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.017312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.017340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.017453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.017480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.017597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.017624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.017777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.017837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.017924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.017949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.018069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.266 [2024-11-26 18:27:22.018094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.266 qpair failed and we were unable to recover it. 00:31:34.266 [2024-11-26 18:27:22.018187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.018216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 
00:31:34.267 [2024-11-26 18:27:22.018297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.018329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.018447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.018473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.018584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.018611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.018697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.018722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.018840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.018866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.018972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.019002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.019120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.019147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.019268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.019295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.019387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.019413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.019548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.019602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 
00:31:34.267 [2024-11-26 18:27:22.019737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.019799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.019973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.019999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.020138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.020165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.020265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.020292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.020419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.020445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.020579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.020618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.020729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.020755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.020873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.020900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.021010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.021036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.021150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.021177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 
00:31:34.267 [2024-11-26 18:27:22.021289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.021336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.021453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.021480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.021569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.021594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.021704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.021730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.021847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.021873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.021978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.022003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.022149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.022177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.022294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.022326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.022441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.022469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 00:31:34.267 [2024-11-26 18:27:22.022558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.267 [2024-11-26 18:27:22.022583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.267 qpair failed and we were unable to recover it. 
00:31:34.267 [2024-11-26 18:27:22.022693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.267 [2024-11-26 18:27:22.022718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:34.267 qpair failed and we were unable to recover it.
00:31:34.267-00:31:34.273 [2024-11-26 18:27:22.022801 - 18:27:22.052346] The same three-line error repeats for every subsequent reconnect attempt in this window: posix_sock_create() reports connect() failed with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock() reports a sock connection error for tqpair handles 0x7f5078000b90, 0x7f5074000b90, and 0x7f5080000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."
00:31:34.273 [2024-11-26 18:27:22.052433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.052460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.052568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.052594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.052731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.052758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.052874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.052900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.053038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.053079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.053174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.053202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.053324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.053352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.053443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.053471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.053568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.053595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.053743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.053770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 
00:31:34.273 [2024-11-26 18:27:22.053970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.054036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.054148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.054175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.054267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.054294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.054401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.054428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.054517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.054544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.054663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.273 [2024-11-26 18:27:22.054689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.273 qpair failed and we were unable to recover it. 00:31:34.273 [2024-11-26 18:27:22.054807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.054836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.054950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.054976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.055088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.055114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.055205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.055231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 
00:31:34.274 [2024-11-26 18:27:22.055338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.055365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.055470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.055496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.055585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.055613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.055731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.055758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.055852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.055878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.055966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.055994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.056112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.056139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.056231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.056259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.056455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.056511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.056667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.056693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 
00:31:34.274 [2024-11-26 18:27:22.056831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.056857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.056965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.056991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.057073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.057099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.057179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.057205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.057290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.057333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.057476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.057503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.057582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.057609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.057745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.057771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.057865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.057892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.057990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.058021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 
00:31:34.274 [2024-11-26 18:27:22.058134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.058160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.058277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.058310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.058437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.058463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.058551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.058577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.058717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.058743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.058852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.058879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.058983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.059022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.059115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.059142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.059229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.059258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.059408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.059435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 
00:31:34.274 [2024-11-26 18:27:22.059551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.059579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.059667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.059694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.059789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.059816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.059908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.274 [2024-11-26 18:27:22.059936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.274 qpair failed and we were unable to recover it. 00:31:34.274 [2024-11-26 18:27:22.060029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.060058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.060150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.060178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.060264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.060291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.060437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.060464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.060551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.060577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.060686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.060713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 
00:31:34.275 [2024-11-26 18:27:22.060828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.060855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.060936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.060962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.061097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.061123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.061267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.061294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.061386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.061412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.061519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.061546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.061664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.061691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.061794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.061821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.061902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.061928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.062066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.062092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 
00:31:34.275 [2024-11-26 18:27:22.062212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.062238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.062345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.062372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.062482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.062509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.062611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.062637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.062744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.062770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.062906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.062933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.063079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.063106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.063207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.063233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.063324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.063352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.063480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.063525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 
00:31:34.275 [2024-11-26 18:27:22.063652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.063681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.063825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.063853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.063969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.063997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.064073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.064101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.064209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.064236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.064360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.064389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.064510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.064536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.064645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.064672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.064811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.064838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.064978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.065004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 
00:31:34.275 [2024-11-26 18:27:22.065095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.065121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.065210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.065237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.275 qpair failed and we were unable to recover it. 00:31:34.275 [2024-11-26 18:27:22.065373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.275 [2024-11-26 18:27:22.065400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.065516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.065543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.065629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.065656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.065759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.065786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.065901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.065928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.066022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.066049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.066160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.066186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.066297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.066330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 
00:31:34.276 [2024-11-26 18:27:22.066472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.066499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.066640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.066667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.066755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.066782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.066897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.066924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.067037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.067064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.067150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.067179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.067273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.067322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.067422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.067450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.067641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.067668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.067760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.067789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 
00:31:34.276 [2024-11-26 18:27:22.067899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.067926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.068046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.068073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.068185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.068212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.068292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.068344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.068433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.068463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.068545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.068572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.068690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.068717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.068801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.068828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.068911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.068940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.069030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.069061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 
00:31:34.276 [2024-11-26 18:27:22.069175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.069202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.069323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.069350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.069512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.069567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.069673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.069699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.069793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.069821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.069938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.069965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.070083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.070109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.070196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.070224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.070337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.070364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 00:31:34.276 [2024-11-26 18:27:22.070448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.070475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.276 qpair failed and we were unable to recover it. 
00:31:34.276 [2024-11-26 18:27:22.070579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.276 [2024-11-26 18:27:22.070605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.070683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.070709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.070847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.070874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.070976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.071005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.071120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.071147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.071230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.071258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.071373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.071400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.071488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.071514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.071625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.071652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.071769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.071796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 
00:31:34.277 [2024-11-26 18:27:22.071883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.071910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.071992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.072019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.072103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.072129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.072236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.072262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.072386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.072413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.072522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.072549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.072659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.072686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.072824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.072851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.072962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.072989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.073064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.073091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 
00:31:34.277 [2024-11-26 18:27:22.073177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.073204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.073321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.073349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.073460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.073487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.073575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.073602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.073724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.073750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.073864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.073891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.074001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.074028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.074142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.074169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.074282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.074314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.074408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.074439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 
00:31:34.277 [2024-11-26 18:27:22.074520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.074547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.074662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.074689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.074834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.074861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.074946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.074973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.075086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.075112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.075186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.075213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.075339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.075380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.075486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.075514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.277 qpair failed and we were unable to recover it. 00:31:34.277 [2024-11-26 18:27:22.075609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.277 [2024-11-26 18:27:22.075638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.075783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.075810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 
00:31:34.278 [2024-11-26 18:27:22.075924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.075951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.076091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.076117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.076231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.076259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.076414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.076442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.076524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.076551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.076766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.076829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.077011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.077038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.077132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.077158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.077287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.077345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.077442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.077471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 
00:31:34.278 [2024-11-26 18:27:22.077611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.077638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.077753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.077780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.077918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.077972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.078056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.078082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.078171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.078201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.078321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.078349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.078477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.078515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.078667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.078740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.079002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.079066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.079250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.079276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 
00:31:34.278 [2024-11-26 18:27:22.079367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.079394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.079509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.079535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.079647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.079674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.079809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.079835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.079976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.080038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.080201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.080226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.080362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.080403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.080498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.080528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.080691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.080748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.080965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.081024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 
00:31:34.278 [2024-11-26 18:27:22.081142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.081168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.081265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.081292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.081418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.278 [2024-11-26 18:27:22.081445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.278 qpair failed and we were unable to recover it. 00:31:34.278 [2024-11-26 18:27:22.081557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.081591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.081667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.081692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.081771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.081796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.081904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.081972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.082238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.082317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.082466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.082492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.082577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.082648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 
00:31:34.279 [2024-11-26 18:27:22.082905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.082970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.083128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.083157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.083278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.083309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.083400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.083432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.083549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.083576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.083722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.083776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.083887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.083915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.084031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.084059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.084155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.084181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.084299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.084333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 
00:31:34.279 [2024-11-26 18:27:22.084442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.084468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.084579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.084605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.084695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.084721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.084809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.084837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.084947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.084973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.085128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.085169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.085263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.085291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.085417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.085446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.085557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.085584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.085670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.085697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 
00:31:34.279 [2024-11-26 18:27:22.085780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.085807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.085886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.085914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.086030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.086058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.086167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.086193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.086311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.086338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.086451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.279 [2024-11-26 18:27:22.086477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.279 qpair failed and we were unable to recover it. 00:31:34.279 [2024-11-26 18:27:22.086611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.086637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.086720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.086747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.086881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.086952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.087133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.087180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 
00:31:34.280 [2024-11-26 18:27:22.087267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.087296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.087426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.087455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.087555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.087582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.087666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.087693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.087774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.087800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.087893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.087928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.088108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.088180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.088309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.088338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.088457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.088485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.088577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.088603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 
00:31:34.280 [2024-11-26 18:27:22.088687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.088713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.088829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.088857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.088969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.088998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.089094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.089123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.089250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.089277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.089418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.089445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.089563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.089591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.089740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.089767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.089850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.089878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.089961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.089989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 
00:31:34.280 [2024-11-26 18:27:22.090127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.090154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.090240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.090269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.090370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.090399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.090509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.090581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.090763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.090815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.090899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.090926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.091049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.091076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.091163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.091190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.091332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.091360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.091450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.091476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 
00:31:34.280 [2024-11-26 18:27:22.091594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.091620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.091707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.091734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.091846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.091873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.091994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.280 [2024-11-26 18:27:22.092020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.280 qpair failed and we were unable to recover it. 00:31:34.280 [2024-11-26 18:27:22.092121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.092162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.092309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.092337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.092435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.092461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.092580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.092607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.092701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.092728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.092820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.092846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 
00:31:34.281 [2024-11-26 18:27:22.092941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.092974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.093061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.093087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.093251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.093291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.093419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.093447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.093539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.093565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.093751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.093817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.094021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.094084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.094351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.094379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.094493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.094519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.094765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.094829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 
00:31:34.281 [2024-11-26 18:27:22.095082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.095150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.095343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.095373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.095487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.095514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.095607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.095633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.095719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.095746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.095861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.095888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.096033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.096059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.096176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.096203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.096321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.096351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 00:31:34.281 [2024-11-26 18:27:22.096472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.281 [2024-11-26 18:27:22.096498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.281 qpair failed and we were unable to recover it. 
00:31:34.281 [2024-11-26 18:27:22.096583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.281 [2024-11-26 18:27:22.096610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:34.281 qpair failed and we were unable to recover it.
00:31:34.281 [... the same three-line sequence - posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error", "qpair failed and we were unable to recover it." - repeats continuously from 18:27:22.096 through 18:27:22.129 (console time 00:31:34.281-00:31:34.287), cycling over tqpair values 0x7f5074000b90, 0x7f5078000b90, 0x7f5080000b90 and 0xfaffa0, all targeting addr=10.0.0.2, port=4420 ...]
00:31:34.287 [2024-11-26 18:27:22.129535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.129561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.129684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.129710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.129791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.129817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.129906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.129933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.130089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.130161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.130456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.130496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.130637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.130677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.130763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.130791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.130964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.131017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.131129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.131156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 
00:31:34.287 [2024-11-26 18:27:22.131264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.131309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.131455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.131487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.131578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.131616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.131703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.131730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.131864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.131891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.132030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.132057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.132169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.132195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.132336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.132365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.132472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.132511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.132619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.132647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 
00:31:34.287 [2024-11-26 18:27:22.132785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.132812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.132978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.287 [2024-11-26 18:27:22.133005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.287 qpair failed and we were unable to recover it. 00:31:34.287 [2024-11-26 18:27:22.133248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.133336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.133477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.133504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.133623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.133687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.133882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.133962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.134128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.134157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.134276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.134320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.134429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.134456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.134546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.134573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 
00:31:34.288 [2024-11-26 18:27:22.134753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.134804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.134885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.134912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.135048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.135074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.135189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.135217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.135341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.135368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.135491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.135518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.135672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.135698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.135785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.135811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.135949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.136002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.136112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.136138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 
00:31:34.288 [2024-11-26 18:27:22.136254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.136281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.136407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.136433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.136518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.136544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.136632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.136659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.136772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.136800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.136889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.136915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.137010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.137036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.137153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.137179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.137259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.137286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.137419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.137446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 
00:31:34.288 [2024-11-26 18:27:22.137562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.137588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.137731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.288 [2024-11-26 18:27:22.137762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.288 qpair failed and we were unable to recover it. 00:31:34.288 [2024-11-26 18:27:22.137878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.137905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.138019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.138045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.138125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.138151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.138230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.138257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.138363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.138390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.138479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.138505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.138651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.138677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.138788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.138815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 
00:31:34.289 [2024-11-26 18:27:22.138951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.138977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.139068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.139096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.139177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.139204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.139319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.139347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.139435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.139461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.139591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.139631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.139758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.139785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.139894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.139921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.140037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.140064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.140154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.140180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 
00:31:34.289 [2024-11-26 18:27:22.140258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.140285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.140488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.140514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.140750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.140818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.141024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.141089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.141260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.141286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.141416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.141442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.141557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.141583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.141827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.141890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.142143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.142221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.142461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.142488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 
00:31:34.289 [2024-11-26 18:27:22.142582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.142618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.142739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.142765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.142996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.143022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.143218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.143243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.143394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.143422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.143537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.143565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.143852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.143878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.143993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.144019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.144205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.144270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.144473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.144513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 
00:31:34.289 [2024-11-26 18:27:22.144703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.144764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.144953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.145009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.145110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.145139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.145275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.145326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.145480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.145507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.145657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.145711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.145906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.145933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.146145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.146198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.146318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.146346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.146465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.146491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 
00:31:34.289 [2024-11-26 18:27:22.146585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.146621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.146737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.146763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.146912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.146938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.147036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.147063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.147179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.147206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.147291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.147332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.147472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.147498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.147739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.147804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.148079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.148143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.148388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.148415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 
00:31:34.289 [2024-11-26 18:27:22.148509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.148536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.148683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.148733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.148952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.149016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.149201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.149228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.149352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.149380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.149471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.149497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.149642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.149707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.149996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.150059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.289 [2024-11-26 18:27:22.150261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.289 [2024-11-26 18:27:22.150440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.289 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.150588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.150627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 
00:31:34.290 [2024-11-26 18:27:22.150856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.150913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.151025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.151094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.151198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.151225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.151340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.151368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.151484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.151510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.151644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.151671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.151787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.151813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.151923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.151949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.152033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.152059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.152194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.152220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 
00:31:34.290 [2024-11-26 18:27:22.152316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.152343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.152453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.152479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.152594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.152620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.152711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.152741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.152961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.153012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.153121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.153148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.153245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.153271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.153390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.153417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.153533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.153561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.153700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.153748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 
00:31:34.290 [2024-11-26 18:27:22.153832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.153859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.153971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.153997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.154108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.154134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.154223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.154251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.154350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.154377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.154515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.154541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.154670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.154696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.154810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.154836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.154951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.154977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.155166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.155230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 
00:31:34.290 [2024-11-26 18:27:22.155445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.155472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.155586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.155666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.155929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.155992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.156247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.156339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.156467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.156493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.156603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.156667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.156959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.157023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.157339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.157385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.157466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.157529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.157807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.157870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 
00:31:34.290 [2024-11-26 18:27:22.158102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.158166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.158374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.158414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.158535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.158564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.158700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.158765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.158906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.158933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.159148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.159202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.159332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.159359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.159570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.159625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.159709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.159736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.159850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.159911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 
00:31:34.290 [2024-11-26 18:27:22.160048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.160074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.160161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.160189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.160314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.160341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.160490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.160516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.160612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.160640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.160754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.160781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.160891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.160918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.161043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.161069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.161145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.161171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.161324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.161351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 
00:31:34.290 [2024-11-26 18:27:22.161509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.161564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.161778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.161832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.161953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.161980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.162095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.162122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.162235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.162262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.162385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.290 [2024-11-26 18:27:22.162412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.290 qpair failed and we were unable to recover it. 00:31:34.290 [2024-11-26 18:27:22.162503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.162535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.162655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.162682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.162816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.162843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.162930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.162958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 
00:31:34.291 [2024-11-26 18:27:22.163047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.163074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.163185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.163211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.163292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.163328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.163411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.163437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.163521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.163548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.163671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.163697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.163787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.163814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.163892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.163918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.164029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.164056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.164163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.164189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 
00:31:34.291 [2024-11-26 18:27:22.164299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.164345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.164460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.164488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.164597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.164624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.164738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.164766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.164878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.164905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.164980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.165006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.165122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.165148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.165234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.165260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.165476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.165575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.165883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.165935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 
00:31:34.291 [2024-11-26 18:27:22.166110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.166161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.166272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.166318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.166491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.166553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.166712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.166764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.166850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.166883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.166999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.167026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.167145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.167171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.167263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.167309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.167569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.167644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.167944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.168018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 
00:31:34.291 [2024-11-26 18:27:22.168203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.168229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.168341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.168369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.168456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.168482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.168607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.168634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.168723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.168749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.168835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.168861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.169029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.169093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.169315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.169342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.169449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.169475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.169557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.169583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 
00:31:34.291 [2024-11-26 18:27:22.169723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.169749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.169903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.169966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.170197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.170261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.170456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.170484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.170658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.170722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.170962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.171026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.171290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.171367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.171468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.171493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.171608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.171633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.171737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.171766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 
00:31:34.291 [2024-11-26 18:27:22.171953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.172029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.172201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.172227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.172318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.172346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.172452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.172478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.172581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.172612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.172691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.172717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.172795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.172821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.172941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.173010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.173198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.173262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.173455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.173481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 
00:31:34.291 [2024-11-26 18:27:22.173621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.173647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.173762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.173822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.174114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.174178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.174366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.174393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.174503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.174529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.174704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.174769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.174990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.175055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.175288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.175374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.175481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.175507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 00:31:34.291 [2024-11-26 18:27:22.175619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.291 [2024-11-26 18:27:22.175645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.291 qpair failed and we were unable to recover it. 
00:31:34.292 [2024-11-26 18:27:22.175756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.175782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.176068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.176131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.176299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.176333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.176438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.176464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.176548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.176611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.176781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.176843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.177098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.177161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.177409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.177440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.177534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.177560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.177674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.177699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 
00:31:34.292 [2024-11-26 18:27:22.177948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.178011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.178262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.178340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.178461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.178488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.178649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.178716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.178995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.179057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.179256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.179282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.179400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.179427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.179545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.179597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.179848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.179911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.180145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.180209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 
00:31:34.292 [2024-11-26 18:27:22.180422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.180448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.180564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.180634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.180878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.180942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.181197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.181223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.181328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.181355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.181450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.181483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.181629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.181693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.181873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.181899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.181994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.182020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.182119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.182145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 
00:31:34.292 [2024-11-26 18:27:22.182258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.182339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.182681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.182753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.182999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.183062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.183330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.183394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.183648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.183712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.184007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.184071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.184354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.184418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.184673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.184737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.184997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.185060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.185335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.185400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 
00:31:34.292 [2024-11-26 18:27:22.185684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.185709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.185800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.185828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.186015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.186078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.186342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.186409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.186650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.186717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.187014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.187087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.187381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.187408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.187518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.187544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.187716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.187790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.188032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.188096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 
00:31:34.292 [2024-11-26 18:27:22.188382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.188447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.188702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.188765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.189051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.189115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.189360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.189424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.189676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.189740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.190009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.190035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.190145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.190171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.190375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.190440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.190722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.190797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.191081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.191146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 
00:31:34.292 [2024-11-26 18:27:22.191401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.191468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.191735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.191799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.192100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.192176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.192433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.192467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.192583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.192617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.192823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.192887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.193098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.193161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.193394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.193459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.193726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.193789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.194081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.194144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 
00:31:34.292 [2024-11-26 18:27:22.194354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.194419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.194613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.194677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.194966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.195029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.292 [2024-11-26 18:27:22.195278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.292 [2024-11-26 18:27:22.195370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.292 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.195623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.195685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.195919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.195982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.196277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.196368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.196570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.196633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.196910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.196936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.197069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.197095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 
00:31:34.293 [2024-11-26 18:27:22.197233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.197259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.197472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.197539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.197821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.197886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.198132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.198195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.198466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.198532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.198786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.198849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.199112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.199178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.199473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.199499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.199626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.199652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.199861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.199926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 
00:31:34.293 [2024-11-26 18:27:22.200190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.200253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.200530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.200594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.200881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.200950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.201249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.201362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.201664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.201728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.201969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.202032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.202279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.202367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.202623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.202687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.202940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.203004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.203286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.203366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 
00:31:34.293 [2024-11-26 18:27:22.203639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.203711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.203992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.204057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.204344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.204409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.204669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.204733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.204942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.205007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.205285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.205380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.205628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.205699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.205985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.206049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.206341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.206406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.206692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.206754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 
00:31:34.293 [2024-11-26 18:27:22.207055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.207130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.207391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.207459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.207690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.207754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.207997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.208061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.208348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.208414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.208707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.208770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.209054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.209129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.209423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.209488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.209734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.209797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.210019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.210086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 
00:31:34.293 [2024-11-26 18:27:22.210372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.210438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.210694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.210759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.211038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.211102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.211357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.211422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.211679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.211704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.211833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.211873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.212052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.212125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.212377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.212404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.212516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.212543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.212673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.212699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 
00:31:34.293 [2024-11-26 18:27:22.212800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.212827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.212944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.212970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.213209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.213274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.213518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.213543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.213631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.213657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.213806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.213833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.214064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.214091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.214199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.214226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.214413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.214440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.214691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.214756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 
00:31:34.293 [2024-11-26 18:27:22.215012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.215075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.215349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.215376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.215466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.293 [2024-11-26 18:27:22.215493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.293 qpair failed and we were unable to recover it. 00:31:34.293 [2024-11-26 18:27:22.215609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.215635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.215839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.215903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.216195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.216258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.216411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.216437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.216518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.216544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.216633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.216659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.216796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.216822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 
00:31:34.294 [2024-11-26 18:27:22.216927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.216953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.217103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.217166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.217396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.217425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.217537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.217564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.217675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.217702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.217782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.217846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.218036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.218107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.218405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbdf30 is same with the state(6) to be set 00:31:34.294 [2024-11-26 18:27:22.218813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.218912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.219168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.219197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 
00:31:34.294 [2024-11-26 18:27:22.219324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.219353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.219448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.219475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.219781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.219849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.220071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.220136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.220446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.220515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.220779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.220806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.220947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.220973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.221110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.221137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.221337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.221404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.221592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.221656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 
00:31:34.294 [2024-11-26 18:27:22.221863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.221888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.222028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.222055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.222171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.222198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.222288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.222326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.222465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.222528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.222725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.222798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.223083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.223147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.223402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.223429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.223543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.223569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.223711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.223737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 
00:31:34.294 [2024-11-26 18:27:22.223895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.223958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.224252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.224335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.224581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.224648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.224948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.225032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.225156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.225183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.225350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.225377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.225453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.225508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.225746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.225808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.225999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.226071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.226209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.226235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 
00:31:34.294 [2024-11-26 18:27:22.226384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.226410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.226579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.226648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.226890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.226953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.227141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.227204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.227509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.227573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.227784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.227811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.227925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.227952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.228073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.228101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.228210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.228240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.228342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.228369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 
00:31:34.294 [2024-11-26 18:27:22.228545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.228571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.228712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.228738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.228854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.228880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.228973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.229006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.229111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.229167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.229387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.229451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.229693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.229756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.230037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.230101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.230306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.230332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 00:31:34.294 [2024-11-26 18:27:22.230450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.294 [2024-11-26 18:27:22.230475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.294 qpair failed and we were unable to recover it. 
00:31:34.295 [2024-11-26 18:27:22.230594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.295 [2024-11-26 18:27:22.230621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:34.295 qpair failed and we were unable to recover it.
00:31:34.295 [2024-11-26 18:27:22.233323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.295 [2024-11-26 18:27:22.233420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:34.295 qpair failed and we were unable to recover it.
00:31:34.579 [2024-11-26 18:27:22.282866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.579 [2024-11-26 18:27:22.282943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:34.579 qpair failed and we were unable to recover it.
00:31:34.579 [2024-11-26 18:27:22.283154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.283212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.283513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.283574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.283793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.283852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.284027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.284085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.284342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.284396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.284616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.284694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.284917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.284975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.285213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.285264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.285557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.285635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.285838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.285925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 
00:31:34.579 [2024-11-26 18:27:22.286204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.286255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.286491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.286569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.286831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.286907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.287168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.287219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.287440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.287518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.287822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.287910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.288174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.288229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.288499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.288561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.288795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.288871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.289105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.289159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 
00:31:34.579 [2024-11-26 18:27:22.289396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.289457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.289726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.289801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.290021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.579 [2024-11-26 18:27:22.290077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.579 qpair failed and we were unable to recover it. 00:31:34.579 [2024-11-26 18:27:22.290358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.290418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.290675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.290752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.291024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.291088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.291334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.291394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.291633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.291711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.292005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.292065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.292278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.292360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 
00:31:34.580 [2024-11-26 18:27:22.292613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.292695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.292938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.293006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.293233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.293291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.293528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.293617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.293881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.293951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.294180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.294240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.294568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.294648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.294889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.294951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.295168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.295228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.295552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.295617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 
00:31:34.580 [2024-11-26 18:27:22.295853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.295923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.296127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.296186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.296478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.296557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.296834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.296913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.297147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.297208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.297511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.297589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.297795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.297872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.298079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.298141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.298437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.298525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.298792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.298868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 
00:31:34.580 [2024-11-26 18:27:22.299102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.299172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.299474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.299562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.299861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.299940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.300202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.300262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.300524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.300601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.300865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.300940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.301205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.301264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.301560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.301641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.301943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.302035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.302264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.302352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 
00:31:34.580 [2024-11-26 18:27:22.302615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.302692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.302987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.303073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.303341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.303403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.303621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.303702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.303998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.304077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.304314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.304374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.304623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.304701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.304921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.305001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.305229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.305290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.305582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.305670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 
00:31:34.580 [2024-11-26 18:27:22.305931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.306006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.306240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.306315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.306496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.306556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.306814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.306892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.307176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.307264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.307533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.307624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.307896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.307974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.308174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.308249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.308568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.308647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.580 [2024-11-26 18:27:22.308921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.308979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 
00:31:34.580 [2024-11-26 18:27:22.309247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.580 [2024-11-26 18:27:22.309329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.580 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.309574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.309654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.309885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.309963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.310162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.310223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.310530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.310619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.310874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.310951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.311137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.311196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.311461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.311540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.311836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.311913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.312106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.312168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 
00:31:34.581 [2024-11-26 18:27:22.312477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.312556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.312848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.312927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.313204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.313264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.313522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.313611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.313819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.313898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.314126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.314185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.314461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.314540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.314833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.314914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.315192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.315251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.315528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.315607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 
00:31:34.581 [2024-11-26 18:27:22.315864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.315942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.316152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.316213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.316495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.316585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.316882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.316958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.317236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.317295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.317604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.317682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.317963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.318048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.318288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.318365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.318605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.318681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.318948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.319026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 
00:31:34.581 [2024-11-26 18:27:22.319283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.319359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.319596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.319674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.319903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.319980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.320241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.320300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.320584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.320666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.320930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.321013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.321232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.321292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.321612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.321697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.321957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.322045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.322239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.322322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 
00:31:34.581 [2024-11-26 18:27:22.322589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.322667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.322968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.323044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.323251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.323327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.323588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.323667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.323886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.323963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.324195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.324255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.324508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.324588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.324849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.324918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.325183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.325242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.325566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.325655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 
00:31:34.581 [2024-11-26 18:27:22.325951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.326039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.326285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.326362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.326635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.326711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.326981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.327042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.327322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.327384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.327639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.327714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.328025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.328101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.328338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.328400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.328634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.328731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.329004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.329092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 
00:31:34.581 [2024-11-26 18:27:22.329367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.329428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.329717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.329804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.330068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.330143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.330358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.330438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.581 [2024-11-26 18:27:22.330743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.581 [2024-11-26 18:27:22.330824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.581 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.331117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.331202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.331466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.331526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.331820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.331896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.332139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.332198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.332476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.332554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 
00:31:34.582 [2024-11-26 18:27:22.332823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.332901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.333136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.333201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.333461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.333539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.333776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.333853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.334063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.334123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.334402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.334483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.334776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.334862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.335091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.335150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.335441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.335526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.335800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.335879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 
00:31:34.582 [2024-11-26 18:27:22.336093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.336153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.336432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.336495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.336785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.336862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.337149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.337208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.337506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.337589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.337880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.337966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.338229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.338289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.338526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.338613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.338863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.338940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.339199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.339259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 
00:31:34.582 [2024-11-26 18:27:22.339570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.339650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.339907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.339983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.340203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.340262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.340598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.340688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.340984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.341066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.341297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.341374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.341631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.341709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.341945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.342023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.342239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.342324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.342620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.342706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 
00:31:34.582 [2024-11-26 18:27:22.342952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.343028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.343330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.343391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.343653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.343730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.344020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.344107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.344337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.344398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.344659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.344735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.345012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.345098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.345389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.345467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.345761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.345848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.346107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.346184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 
00:31:34.582 [2024-11-26 18:27:22.346438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.346531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.346790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.346876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.347085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.347145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.347401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.347481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.347788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.347867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.348072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.348135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.348339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.348400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.348634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.348724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.348950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.349009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.349300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.349371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 
00:31:34.582 [2024-11-26 18:27:22.349639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.349719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.349972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.350048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.350277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.350349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.350604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.350685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.350978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.351055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.351328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.351388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.351649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.351726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.351948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.352045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.582 [2024-11-26 18:27:22.352321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.582 [2024-11-26 18:27:22.352383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.582 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.352647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.352725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 
00:31:34.583 [2024-11-26 18:27:22.352988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.353078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.353336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.353397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.353650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.353727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.353993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.354078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.354325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.354415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.354661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.354739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.354983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.355060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.355333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.355395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.355700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.355779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.356002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.356078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 
00:31:34.583 [2024-11-26 18:27:22.356344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.356405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.356655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.356732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.357021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.357108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.357402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.357462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.357717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.357793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.358050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.358129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.358388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.358469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.358699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.358787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.359081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.359157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.359412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.359490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 
00:31:34.583 [2024-11-26 18:27:22.359792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.359878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.360160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.360219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.360474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.360553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.360798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.360875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.361148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.361208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.361486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.361575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.361868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.361956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.362152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.362211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.362491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.362569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.362839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.362900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 
00:31:34.583 [2024-11-26 18:27:22.363162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.363222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.363546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.363635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.363932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.364010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.364257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.364331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.364578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.364657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.364941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.365033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.365270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.365361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.365635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.365711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.365969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.366046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.366334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.366394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 
00:31:34.583 [2024-11-26 18:27:22.366686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.366771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.367043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.367122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.367357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.367419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.367720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.367811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.368105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.368194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.368457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.368535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.368816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.368894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.369191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.369268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.369560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.369620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.369911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.369989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 
00:31:34.583 [2024-11-26 18:27:22.370260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.370329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.370602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.370679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.370939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.371017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.371191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.371249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.371516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.371604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.371872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.371949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.372137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.372194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.372460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.372539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.583 [2024-11-26 18:27:22.372803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.583 [2024-11-26 18:27:22.372889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.583 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.373158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.373217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 
00:31:34.584 [2024-11-26 18:27:22.373554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.373633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.373953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.374030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.374270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.374348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.374593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.374671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.374976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.375056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.375338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.375398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.375618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.375694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.375932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.375992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.376233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.376292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.376563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.376648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 
00:31:34.584 [2024-11-26 18:27:22.376932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.377019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.377243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.377327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.377610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.377695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.377914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.377993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.378204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.378264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.378489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.378566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.378857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.378935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.379140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.379200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.379440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.379521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.379806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.379882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 
00:31:34.584 [2024-11-26 18:27:22.380118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.380177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.380609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.380687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.380982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.381059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.381290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.381362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.381666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.381736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.382004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.382081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.382357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.382418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.382719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.382795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.383059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.383135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.383349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.383410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 
00:31:34.584 [2024-11-26 18:27:22.383633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.383709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.383946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.384024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.384256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.384334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.384605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.384700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.384929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.385005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.385221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.385280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.385612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.385699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.386006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.386083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.386319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.386380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.386658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.386736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 
00:31:34.584 [2024-11-26 18:27:22.386970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.387045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.387331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.387392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.387691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.387774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.388038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.388115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.388344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.388404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.388625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.388703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.388993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.389070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.389299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.389368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.389659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.389745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.390003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.390079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 
00:31:34.584 [2024-11-26 18:27:22.390354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.390414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.390707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.390799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.391093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.391180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.391420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.391482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.391748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.391836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.392071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.392148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.392413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.392491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.392773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.392832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.393121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.393198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.393502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.393582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 
00:31:34.584 [2024-11-26 18:27:22.393850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.393929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.394156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.394224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.584 [2024-11-26 18:27:22.394499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.584 [2024-11-26 18:27:22.394577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.584 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.394846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.394906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.395133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.395191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.395513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.395602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.395866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.395944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.396203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.396262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.396558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.396618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.396889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.396968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 
00:31:34.585 [2024-11-26 18:27:22.397199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.397265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.397582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.397668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.397978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.398065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.398329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.398389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.398653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.398716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.399021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.399096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.399369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.399430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.399690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.399767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.400020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.400100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.400375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.400453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 
00:31:34.585 [2024-11-26 18:27:22.400767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.400855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.401121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.401180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.401469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.401547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.401836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.401914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.402196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.402255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.402606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.402677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.402919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.402998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.403210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.403277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.403534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.403620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.403856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.403933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 
00:31:34.585 [2024-11-26 18:27:22.404197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.404256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.404476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.404556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.404818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.404894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.405127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.405197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.405459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.405536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.405833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.405919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.406158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.406217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.406468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.406545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.406801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.406878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.407149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.407208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 
00:31:34.585 [2024-11-26 18:27:22.407474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.407551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.407820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.407898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.408131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.408189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.408455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.408533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.408798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.408874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.409154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.409213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.409489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.409577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.409858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.409936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.410161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.410220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.410493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.410571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 
00:31:34.585 [2024-11-26 18:27:22.410815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.410900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.411167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.411225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.411494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.411571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.411818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.411896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.412179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.412237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.412522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.412602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.412870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.412947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.413176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.413234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.413545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.413635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.413935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.414012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 
00:31:34.585 [2024-11-26 18:27:22.414286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.414369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.585 [2024-11-26 18:27:22.414598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.585 [2024-11-26 18:27:22.414675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.585 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.414962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.415051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.415320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.415387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.415628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.415704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.415986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.416073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.416318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.416378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.416645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.416704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.416904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.416985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.417235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.417293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 
00:31:34.586 [2024-11-26 18:27:22.417566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.417652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.417949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.418036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.418261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.418337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.418603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.418681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.418951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.419030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.419300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.419375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.419651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.419712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.420021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.420097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.420361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.420422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.420733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.420810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 
00:31:34.586 [2024-11-26 18:27:22.421061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.421137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.421428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.421506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.421750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.421826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.422114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.422201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.422457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.422528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.422796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.422881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.423146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.423206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.423430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.423508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.423821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.423908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.424171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.424230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 
00:31:34.586 [2024-11-26 18:27:22.424561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.424646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.424884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.424945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.425130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.425189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.425447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.425526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.425838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.425922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.426195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.426255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.426585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.426672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.426974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.427051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.427264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.427337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.427557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.427635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 
00:31:34.586 [2024-11-26 18:27:22.427860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.427940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.428159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.428233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.428556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.428639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.428947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.429025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.429263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.429354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.429571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.429660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.429882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.429941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.430176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.430235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.430545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.430623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.430921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.430997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 
00:31:34.586 [2024-11-26 18:27:22.431270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.431346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.431609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.431697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.431966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.432047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.432333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.432395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.432659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.432736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.433008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.433086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.433389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.433460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.433748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.433836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.434109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.434169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.434434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.434495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 
00:31:34.586 [2024-11-26 18:27:22.434791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.434879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.435143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.435220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.435531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.435613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.435915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.435993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.436258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.436328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.436616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.436692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.586 [2024-11-26 18:27:22.436967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.586 [2024-11-26 18:27:22.437046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.586 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.437278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.437355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.437616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.437709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.437958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.438037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 
00:31:34.587 [2024-11-26 18:27:22.438278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.438365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.438612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.438690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.439001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.439088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.439336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.439397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.439709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.439786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.440065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.440135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.440371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.440431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.440672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.440748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.441047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.441134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 00:31:34.587 [2024-11-26 18:27:22.441428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.587 [2024-11-26 18:27:22.441506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.587 qpair failed and we were unable to recover it. 
00:31:34.587 [2024-11-26 18:27:22.441814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.587 [2024-11-26 18:27:22.441890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:34.587 qpair failed and we were unable to recover it.
00:31:34.587 [... the same three-line failure sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously for connection attempts timestamped 2024-11-26 18:27:22.441 through 18:27:22.511 ...]
00:31:34.590 [2024-11-26 18:27:22.511201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.590 [2024-11-26 18:27:22.511264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:34.590 qpair failed and we were unable to recover it.
00:31:34.590 [2024-11-26 18:27:22.511518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.511578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.511867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.511946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.512147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.512210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.512485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.512563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.512815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.512891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.513157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.513217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.513499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.513578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.513885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.513961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.514223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.514283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.514565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.514644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 
00:31:34.590 [2024-11-26 18:27:22.514855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.514932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.515193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.515252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.515520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.515599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.515855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.515937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.516149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.516208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.516508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.516586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.516859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.516935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.517200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.517260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.517547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.517625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.517935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.518013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 
00:31:34.590 [2024-11-26 18:27:22.518188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.518247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.518516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.518594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.518780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.518838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.518999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.519067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.519326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.519386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.519630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.519716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.519974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.520051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.520233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.520292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.520574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.520659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.520916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.520992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 
00:31:34.590 [2024-11-26 18:27:22.521205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.590 [2024-11-26 18:27:22.521264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.590 qpair failed and we were unable to recover it. 00:31:34.590 [2024-11-26 18:27:22.521538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.521624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.521898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.521975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.522199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.522257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.522507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.522585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.522849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.522927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.523145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.523203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.523495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.523557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.523806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.523885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.524159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.524228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 
00:31:34.591 [2024-11-26 18:27:22.524507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.524594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.524888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.524965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.525201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.525263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.525584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.525671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.525974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.526052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.526294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.526369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.526622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.526706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.527008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.527096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.527421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.527481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.527779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.527857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 
00:31:34.591 [2024-11-26 18:27:22.528154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.528231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.528499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.528576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.528866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.528953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.529204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.529263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.529470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.529548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.529782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.529861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.530135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.530211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.530485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.530547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.530861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.530937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.531144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.531203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 
00:31:34.591 [2024-11-26 18:27:22.531474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.531552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.531788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.531871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.532103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.532162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.532430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.532510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.532719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.532827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.533039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.533100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.533326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.533387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.533674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.533750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.534043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.534120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.534317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.534378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 
00:31:34.591 [2024-11-26 18:27:22.534679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.534756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.534953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.535031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.535243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.535318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.535529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.535613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.535903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.535990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.536265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.536339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.536552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.536630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.536930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.537013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.537248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.537345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.537654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.537733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 
00:31:34.591 [2024-11-26 18:27:22.538031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.538106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.538365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.538445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.538740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.538816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.539113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.539198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.539525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.539613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.539900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.539982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.540249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.540329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.540607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.540667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.540952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.541037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.541263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.541336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 
00:31:34.591 [2024-11-26 18:27:22.541643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.591 [2024-11-26 18:27:22.541723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.591 qpair failed and we were unable to recover it. 00:31:34.591 [2024-11-26 18:27:22.541934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.542020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.542286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.542360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.542627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.542703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.543006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.543082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.543324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.543384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.543653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.543712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.544006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.544092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.544347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.544407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.544657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.544734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 
00:31:34.592 [2024-11-26 18:27:22.544990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.545066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.545295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.545381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.545609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.545685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.545974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.546058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.546327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.546388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.546681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.546740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.546995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.547072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.547342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.547402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.547575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.547645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.547939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.548025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 
00:31:34.592 [2024-11-26 18:27:22.548318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.548378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.548623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.548682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.548929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.549004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.549243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.549343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.549618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.549678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.549939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.550015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.550232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.550291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.550601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.550687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.550988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.551063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.551326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.551386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 
00:31:34.592 [2024-11-26 18:27:22.551642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.551721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.551995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.552071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.552330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.552391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.552691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.552771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.553081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.553157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.553389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.553450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.553706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.553795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.554093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.554178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.554459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.554521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.554824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.554911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 
00:31:34.592 [2024-11-26 18:27:22.555157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.555233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.555535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.555623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.555838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.555925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.556161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.556223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.556543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.556632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.556942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.557018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.557289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.557380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.557687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.557764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.558076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.558154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 00:31:34.592 [2024-11-26 18:27:22.558388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.592 [2024-11-26 18:27:22.558451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.592 qpair failed and we were unable to recover it. 
00:31:34.592 [... the same retry sequence repeats from 18:27:22.558 through 18:27:22.603: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:31:34.926 [2024-11-26 18:27:22.603314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.926 [2024-11-26 18:27:22.603341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.926 qpair failed and we were unable to recover it. 00:31:34.926 [2024-11-26 18:27:22.603496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.926 [2024-11-26 18:27:22.603541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.926 qpair failed and we were unable to recover it. 00:31:34.926 [2024-11-26 18:27:22.603671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.926 [2024-11-26 18:27:22.603700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.926 qpair failed and we were unable to recover it. 00:31:34.926 [2024-11-26 18:27:22.603784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.926 [2024-11-26 18:27:22.603812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.926 qpair failed and we were unable to recover it. 00:31:34.926 [2024-11-26 18:27:22.603906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.926 [2024-11-26 18:27:22.603932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.926 qpair failed and we were unable to recover it. 00:31:34.926 [2024-11-26 18:27:22.604031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.926 [2024-11-26 18:27:22.604058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.926 qpair failed and we were unable to recover it. 00:31:34.926 [2024-11-26 18:27:22.604150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.926 [2024-11-26 18:27:22.604177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.926 qpair failed and we were unable to recover it. 00:31:34.926 [2024-11-26 18:27:22.604259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.927 [2024-11-26 18:27:22.604285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.927 qpair failed and we were unable to recover it. 00:31:34.927 [2024-11-26 18:27:22.604383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.927 [2024-11-26 18:27:22.604412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:34.927 qpair failed and we were unable to recover it. 00:31:34.927 [2024-11-26 18:27:22.604523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.927 [2024-11-26 18:27:22.604566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.927 qpair failed and we were unable to recover it. 
00:31:34.927 [... the same retry sequence repeats from 18:27:22.604 through 18:27:22.610: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:31:34.928 [2024-11-26 18:27:22.610869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.610896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.611061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.611117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.611314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.611342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.611434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.611461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.611626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.611680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.611873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.611927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.612137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.612190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.612422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.612451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.612553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.612583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.612714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.612748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 
00:31:34.928 [2024-11-26 18:27:22.612943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.613005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.613243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.613326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.613472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.613499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.928 [2024-11-26 18:27:22.613614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.928 [2024-11-26 18:27:22.613667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.928 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.613844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.613899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.614146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.614204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.614418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.614447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.614578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.614606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.614752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.614823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.614981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.615048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 
00:31:34.929 [2024-11-26 18:27:22.615262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.615329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.615467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.615496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.615714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.615769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.616015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.616076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.616331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.616379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.616529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.616557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.616649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.616676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.616781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.616809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.616969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.617024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.617239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.617295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 
00:31:34.929 [2024-11-26 18:27:22.617443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.617472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.617566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.617595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.617755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.617782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.617927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.617954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.618135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.618191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.618354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.618428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.618664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.618720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.618916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.618972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.619153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.929 [2024-11-26 18:27:22.619208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.929 qpair failed and we were unable to recover it. 00:31:34.929 [2024-11-26 18:27:22.619402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.619461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 
00:31:34.930 [2024-11-26 18:27:22.619713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.619770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.619987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.620041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.620300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.620384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.620634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.620699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.620859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.620913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.621098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.621154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.621318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.621376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.621590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.621654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.621903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.621958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.622214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.622280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 
00:31:34.930 [2024-11-26 18:27:22.622515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.622571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.622819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.622875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.623072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.623127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.623318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.623375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.623592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.623654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.623834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.623889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.624082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.624137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.624351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.624411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.624665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.624725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.624948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.625005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 
00:31:34.930 [2024-11-26 18:27:22.625183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.625238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.625447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.625506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.625730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.625785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.626012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.626068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.626286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.930 [2024-11-26 18:27:22.626356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.930 qpair failed and we were unable to recover it. 00:31:34.930 [2024-11-26 18:27:22.626582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.626637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.626847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.626902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.627128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.627183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.627369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.627427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.627682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.627749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 
00:31:34.931 [2024-11-26 18:27:22.628013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.628069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.628319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.628376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.628553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.628611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.628797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.628854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.629081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.629138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.629415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.629472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.629709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.629765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.629989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.630044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.630296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.630364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.630591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.630647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 
00:31:34.931 [2024-11-26 18:27:22.630904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.630959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.631175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.631254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.631504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.631564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.631804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.631861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.632078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.632134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.632353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.632411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.632622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.632678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.632899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.632954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.633182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.633237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.633515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.633589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 
00:31:34.931 [2024-11-26 18:27:22.633814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.931 [2024-11-26 18:27:22.633871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.931 qpair failed and we were unable to recover it. 00:31:34.931 [2024-11-26 18:27:22.634056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.634111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.634289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.634359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.634545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.634608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.634783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.634838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.635047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.635103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.635330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.635387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.635600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.635658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.635909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.635965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.636183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.636240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 
00:31:34.932 [2024-11-26 18:27:22.636539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.636597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.636773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.636829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.637061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.637120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.637400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.637458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.637645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.637701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.637903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.637962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.638216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.638272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.638508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.638567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.638824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.638880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.639127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.639182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 
00:31:34.932 [2024-11-26 18:27:22.639416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.639473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.639681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.639737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.639976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.640031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.640283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.640352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.640555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.640622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.640846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.932 [2024-11-26 18:27:22.640901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.932 qpair failed and we were unable to recover it. 00:31:34.932 [2024-11-26 18:27:22.641133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.641189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.641446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.641505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.641729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.641785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.642036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.642093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 
00:31:34.933 [2024-11-26 18:27:22.642328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.642387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.642585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.642641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.642813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.642872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.643121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.643177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.643427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.643483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.643741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.643798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.644012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.644068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.644293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.644366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.644597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.644657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 00:31:34.933 [2024-11-26 18:27:22.644892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.933 [2024-11-26 18:27:22.644961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.933 qpair failed and we were unable to recover it. 
00:31:34.933 [2024-11-26 18:27:22.645196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.933 [2024-11-26 18:27:22.645256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:34.933 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats back-to-back throughout this span, timestamps 18:27:22.645 through 18:27:22.712 ...]
00:31:34.940 [2024-11-26 18:27:22.712158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:34.940 [2024-11-26 18:27:22.712226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:34.940 qpair failed and we were unable to recover it.
00:31:34.940 [2024-11-26 18:27:22.712499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.712565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.712781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.712849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.713106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.713173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.713418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.713488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.713781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.713847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.714087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.714152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.714381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.714447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.714738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.714803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.715045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.715113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.715336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.715405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 
00:31:34.940 [2024-11-26 18:27:22.715623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.715692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.715949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.716016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.716274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.716353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.716658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.716724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.716973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.717039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.717300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.717395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.717665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.717732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.718015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.718081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.718289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.718368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 00:31:34.940 [2024-11-26 18:27:22.718653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.940 [2024-11-26 18:27:22.718721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.940 qpair failed and we were unable to recover it. 
00:31:34.941 [2024-11-26 18:27:22.719014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.719078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.719367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.719435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.719733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.719798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.720128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.720193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.720438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.720504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.720795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.720859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.721109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.721174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.721430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.721497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.721793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.721859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.722075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.722151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 
00:31:34.941 [2024-11-26 18:27:22.722412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.722479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.722783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.722848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.723091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.723156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.723373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.723441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.723687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.723752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.724010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.724075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.724324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.724393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.724649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.724713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.725007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.725072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.725362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.725429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 
00:31:34.941 [2024-11-26 18:27:22.725721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.725787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.726037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.726104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.726388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.726454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.726749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.726814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.727058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.727126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.727340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.727413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.727613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.727682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.727891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.727960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.728262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.728347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.728644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.728710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 
00:31:34.941 [2024-11-26 18:27:22.729002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.729068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.729332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.729399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.729656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.729722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.730012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.730078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.730345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.730412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.730674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.730739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.730999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.731064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.731320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.941 [2024-11-26 18:27:22.731399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.941 qpair failed and we were unable to recover it. 00:31:34.941 [2024-11-26 18:27:22.731661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.731726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.731978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.732045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 
00:31:34.942 [2024-11-26 18:27:22.732296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.732376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.732658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.732725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.733004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.733068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.733271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.733373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.733622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.733691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.733974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.734039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.734294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.734377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.734628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.734697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.734982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.735047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.735268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.735366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 
00:31:34.942 [2024-11-26 18:27:22.735666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.735731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.735940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.736009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.736226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.736292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.736607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.736683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.736965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.737030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.737231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.737298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.737563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.737631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.737877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.737942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.738186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.738251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.738475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.738543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 
00:31:34.942 [2024-11-26 18:27:22.738749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.738817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.739036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.739100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.739368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.739436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.739649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.739715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.739911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.739975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.740217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.740285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.740574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.740642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.740932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.740997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.741283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.741374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.741625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.741694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 
00:31:34.942 [2024-11-26 18:27:22.741955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.742022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.742274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.742365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.742649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.742716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.742981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.743048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.743261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.743348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.743574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.942 [2024-11-26 18:27:22.743639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.942 qpair failed and we were unable to recover it. 00:31:34.942 [2024-11-26 18:27:22.743852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.743917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.744189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.744253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.744526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.744594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.744848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.744914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 
00:31:34.943 [2024-11-26 18:27:22.745158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.745227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.745470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.745540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.745818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.745883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.746186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.746250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.746558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.746624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.746864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.746929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.747199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.747264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.747492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.747557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.747807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.747873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.748068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.748145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 
00:31:34.943 [2024-11-26 18:27:22.748391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.748458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.748691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.748759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.749057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.749123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.749346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.749412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.749699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.749766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.749960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.750028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.750285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.750365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.750669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.750734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.750982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.751048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.751288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.751371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 
00:31:34.943 [2024-11-26 18:27:22.751626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.751692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.751945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.752011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.752265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.752347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.752576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.752642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.752935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.753002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.753261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.753364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.753624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.753691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.753929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.753994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.754235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.754300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.754585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.754652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 
00:31:34.943 [2024-11-26 18:27:22.754897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.754963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.755213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.755278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.755553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.755618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.755875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.755940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.756231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.756296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.756610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.943 [2024-11-26 18:27:22.756676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.943 qpair failed and we were unable to recover it. 00:31:34.943 [2024-11-26 18:27:22.756945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.944 [2024-11-26 18:27:22.757010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.944 qpair failed and we were unable to recover it. 00:31:34.944 [2024-11-26 18:27:22.757252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.944 [2024-11-26 18:27:22.757369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.944 qpair failed and we were unable to recover it. 00:31:34.944 [2024-11-26 18:27:22.757631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.944 [2024-11-26 18:27:22.757697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.944 qpair failed and we were unable to recover it. 00:31:34.944 [2024-11-26 18:27:22.757989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.944 [2024-11-26 18:27:22.758054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.944 qpair failed and we were unable to recover it. 
00:31:34.949 [2024-11-26 18:27:22.823830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.823898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.824119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.824184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.824398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.824466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.824716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.824781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.825015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.825080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.825356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.825423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.825681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.825745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.826036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.826101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.826318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.826387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.826640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.826706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 
00:31:34.949 [2024-11-26 18:27:22.826970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.827037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.827279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.827361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.827565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.827630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.827877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.827947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.828213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.828279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.828558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.828624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.828867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.828931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.829142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.829209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.829483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.829550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.829798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.829864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 
00:31:34.949 [2024-11-26 18:27:22.830149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.830216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.830511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.830578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.830863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.830928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.831163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.831230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.831499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.831565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.831766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.831831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.832092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.832158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.832409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.832477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.832760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.832826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.833065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.833131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 
00:31:34.949 [2024-11-26 18:27:22.833359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.833425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.833706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.833772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.834033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.834099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.834347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.834414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.834637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.834702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.834999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.835063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.835320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.835399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.835643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.835708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.835998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.836064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 00:31:34.949 [2024-11-26 18:27:22.836329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.949 [2024-11-26 18:27:22.836397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.949 qpair failed and we were unable to recover it. 
00:31:34.949 [2024-11-26 18:27:22.836651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.836720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.836943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.837011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.837324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.837393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.837657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.837723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.838019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.838085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.838297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.838402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.838651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.838719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.839020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.839086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.839373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.839440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.839719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.839784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 
00:31:34.950 [2024-11-26 18:27:22.840083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.840148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.840369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.840438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.840663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.840729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.840972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.841038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.841294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.841377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.841594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.841659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.841942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.842007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.842368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.842436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.842687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.842753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.842986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.843052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 
00:31:34.950 [2024-11-26 18:27:22.843248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.843330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.843578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.843646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.843901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.843966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.844270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.844352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.844627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.844693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.844986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.845050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.845274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.845360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.845645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.845711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.845996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.846061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.846341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.846408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 
00:31:34.950 [2024-11-26 18:27:22.846704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.846770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.847072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.847137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.847432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.847500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.847797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.847864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.848130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.848196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.848434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.848500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.848751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.848830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.849140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.849207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.849453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.849519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.849736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.849804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 
00:31:34.950 [2024-11-26 18:27:22.850050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.850118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.850381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.850448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.850703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.850768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.850977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.950 [2024-11-26 18:27:22.851041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.950 qpair failed and we were unable to recover it. 00:31:34.950 [2024-11-26 18:27:22.851275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.851361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.851656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.851722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.851930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.851997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.852241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.852322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.852541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.852608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.852852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.852916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 
00:31:34.951 [2024-11-26 18:27:22.853175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.853241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.853502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.853569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.853825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.853890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.854119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.854185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.854437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.854505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.854791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.854857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.855128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.855193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.855502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.855570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.855821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.855886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.856141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.856228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 
00:31:34.951 [2024-11-26 18:27:22.856503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.856571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.856810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.856877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.857136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.857202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.857514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.857584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.857874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.857939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.858194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.858259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.858598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.858667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.858960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.859025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.859283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.859366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.859666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.859733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 
00:31:34.951 [2024-11-26 18:27:22.859988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.860057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.860354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.860423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.860725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.860793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.861090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.861160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:34.951 [2024-11-26 18:27:22.861440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:34.951 [2024-11-26 18:27:22.861507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:34.951 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.861762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.861828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.862086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.862163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.862437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.862506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.862716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.862784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.862991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.863060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 
00:31:35.231 [2024-11-26 18:27:22.863282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.863366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.863611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.863677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.863937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.864003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.864290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.864377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.864577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.864649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.864866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.864941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.865192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.865257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.865500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.865566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.865818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.865887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.866141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.866208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 
00:31:35.231 [2024-11-26 18:27:22.866500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.231 [2024-11-26 18:27:22.866568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.231 qpair failed and we were unable to recover it. 00:31:35.231 [2024-11-26 18:27:22.866834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.866899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.867192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.867257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.867524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.867590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.867810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.867876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.868091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.868156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.868409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.868478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.868723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.868788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.869012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.869079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.869369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.869437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 
00:31:35.232 [2024-11-26 18:27:22.869688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.869754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.869954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.870018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.870253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.870349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.870588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.870654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.870942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.871006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.871321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.871387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.871678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.871743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.872024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.872089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.872322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.872389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.872609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.872674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 
00:31:35.232 [2024-11-26 18:27:22.872914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.872979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.873269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.873354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.873599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.873665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.873892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.873958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.874181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.874246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.874550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.874618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.874878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.874956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.875247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.875332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.875591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.875656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.875875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.875953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 
00:31:35.232 [2024-11-26 18:27:22.876243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.876329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.876535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.876602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.876844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.876912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.877166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.877232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.877491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.877557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.877764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.877829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.878075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.878140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.878439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.878506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.878757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.878822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.232 qpair failed and we were unable to recover it. 00:31:35.232 [2024-11-26 18:27:22.879070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.232 [2024-11-26 18:27:22.879136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 
00:31:35.233 [2024-11-26 18:27:22.879398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.879464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.879710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.879778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.880038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.880106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.880323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.880391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.880603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.880669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.880906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.880973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.881266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.881345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.881597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.881663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.881928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.881994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.882202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.882270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 
00:31:35.233 [2024-11-26 18:27:22.882560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.882626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.882862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.882926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.883165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.883229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.883503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.883570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.883787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.883854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.884135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.884201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.884414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.884479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.884733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.884798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.885098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.885163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.885406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.885476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 
00:31:35.233 [2024-11-26 18:27:22.885730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.885796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.886084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.886149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.886409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.886479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.886691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.886758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.887051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.887117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.887378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.887445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.887703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.887781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.888033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.888099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.888367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.888435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.888676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.888741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 
00:31:35.233 [2024-11-26 18:27:22.888992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.889059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.889275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.889358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.889560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.889626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.889871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.889938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.890223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.890289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.890612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.890677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.890905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.233 [2024-11-26 18:27:22.890974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.233 qpair failed and we were unable to recover it. 00:31:35.233 [2024-11-26 18:27:22.891221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.891286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.891561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.891627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.891869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.891933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 
00:31:35.234 [2024-11-26 18:27:22.892207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.892274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.892562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.892628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.892862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.892928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.893142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.893209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.893511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.893578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.893795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.893861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.894118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.894185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.894431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.894501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.894724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.894789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.895010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.895075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 
00:31:35.234 [2024-11-26 18:27:22.895362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.895429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.895694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.895760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.896046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.896112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.896414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.896481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.896668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.896733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.896978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.897045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.897291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.897374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.897639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.897704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.897910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.897975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.898192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.898258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 
00:31:35.234 [2024-11-26 18:27:22.898535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.898601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.898846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.898914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.899184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.899249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.899539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.899606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.899863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.899929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.900230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.900295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.900577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.900653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.900897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.900966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.901245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.901330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.901540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.901605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 
00:31:35.234 [2024-11-26 18:27:22.901866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.901932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.902189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.902254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.902575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.902675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.902979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.903048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.903296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.903380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.234 qpair failed and we were unable to recover it. 00:31:35.234 [2024-11-26 18:27:22.903663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.234 [2024-11-26 18:27:22.903729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.904014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.904080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.904358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.904425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.904625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.904693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.904950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.905015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 
00:31:35.235 [2024-11-26 18:27:22.905274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.905355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.905651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.905715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.905998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.906066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.906282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.906371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.906626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.906691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.906891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.906956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.907193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.907257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.907528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.907593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.907880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.907944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.908150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.908214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 
00:31:35.235 [2024-11-26 18:27:22.908511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.908577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.908821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.908884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.909130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.909197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.909440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.909507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.909719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.909784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.910067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.910131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.910352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.910421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.910669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.910733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.910971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.911035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.911217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.911282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 
00:31:35.235 [2024-11-26 18:27:22.911541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.911605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.911854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.911921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.912166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.912231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.912475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.912541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.912787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.912851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.913133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.913198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.913455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.913533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.913790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.913854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.914113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.914179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.914455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.914521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 
00:31:35.235 [2024-11-26 18:27:22.914763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.914828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.915102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.915165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.915391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.915456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.235 [2024-11-26 18:27:22.915670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.235 [2024-11-26 18:27:22.915734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.235 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.915954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.916017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.916262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.916343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.916641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.916705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.917001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.917065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.917290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.917369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.917628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.917694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 
00:31:35.236 [2024-11-26 18:27:22.918011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.918075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.918375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.918442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.918690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.918754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.918973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.919039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.919291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.919375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.919657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.919720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.919963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.920027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.920325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.920392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.920686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.920750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.920968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.921033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 
00:31:35.236 [2024-11-26 18:27:22.921325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.921392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.921636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.921702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.921923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.921988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.922248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.922343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.922551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.922616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.922870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.922936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.923228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.923293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.923519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.923594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.923874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.923941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.924146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.924214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 
00:31:35.236 [2024-11-26 18:27:22.924483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.924550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.924740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.924807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.925062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.925126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.925398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.925464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.925759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.925823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.926074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.236 [2024-11-26 18:27:22.926138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.236 qpair failed and we were unable to recover it. 00:31:35.236 [2024-11-26 18:27:22.926386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.926451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.926736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.926801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.927084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.927148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.927391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.927457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 
00:31:35.237 [2024-11-26 18:27:22.927702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.927765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.928054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.928117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.928381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.928447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.928698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.928762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.928974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.929037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.929332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.929398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.929645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.929710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.929956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.930023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.930220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.930287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.930601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.930667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 
00:31:35.237 [2024-11-26 18:27:22.930924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.930989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.931245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.931327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.931556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.931621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.931902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.931968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.932192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.932257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.932491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.932555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.932802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.932866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.933113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.933177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.933418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.933483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.933744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.933808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 
00:31:35.237 [2024-11-26 18:27:22.934054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.934117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.934395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.934461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.934743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.934808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.935098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.935174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.935449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.935515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.935757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.935820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.936058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.936124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.936362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.936428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.936669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.936733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.936976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.937040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 
00:31:35.237 [2024-11-26 18:27:22.937246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.937328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.937589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.937652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.937886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.937950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.938244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.237 [2024-11-26 18:27:22.938336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.237 qpair failed and we were unable to recover it. 00:31:35.237 [2024-11-26 18:27:22.938641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.938706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.939001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.939064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.939261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.939342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.939612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.939678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.939873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.939936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.940186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.940250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 
00:31:35.238 [2024-11-26 18:27:22.940480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.940548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.940830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.940894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.941107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.941171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.941426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.941492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.941719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.941784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.942073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.942137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.942415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.942480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.942738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.942802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.943079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.943143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.943388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.943455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 
00:31:35.238 [2024-11-26 18:27:22.943683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.943748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.943978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.944044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.944330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.944395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.944630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.944696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.944947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.945013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.945324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.945389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.945632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.945697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.945934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.946002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.946256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.946351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.946650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.946714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 
00:31:35.238 [2024-11-26 18:27:22.946980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.947045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.947296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.947377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.947587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.947651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.947937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.948012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.948199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.948262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.948543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.948623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.948880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.948945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.949184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.949250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.949505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.949571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.949823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.949888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 
00:31:35.238 [2024-11-26 18:27:22.950086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.950154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.950422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.950489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.238 qpair failed and we were unable to recover it. 00:31:35.238 [2024-11-26 18:27:22.950682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.238 [2024-11-26 18:27:22.950745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.950998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.951065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.951280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.951364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.951657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.951721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.951969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.952036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.952287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.952367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.952626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.952690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.952983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.953047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 
00:31:35.239 [2024-11-26 18:27:22.953339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.953406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.953638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.953702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.953894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.953958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.954159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.954224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.954467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.954535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.954789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.954854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.955102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.955166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.955450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.955517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.955766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.955833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.956118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.956182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 
00:31:35.239 [2024-11-26 18:27:22.956415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.956480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.956732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.956797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.957083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.957147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.957384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.957450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.957713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.957778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.957977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.958044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.958297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.958376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.958667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.958732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.958971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.959036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.959283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.959368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 
00:31:35.239 [2024-11-26 18:27:22.959575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.959638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.959886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.959951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.960147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.960211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.960478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.960551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.960804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.960865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.961080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.961141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.961371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.961436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.961685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.961755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.962044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.962110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.239 [2024-11-26 18:27:22.962345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.962413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 
00:31:35.239 [2024-11-26 18:27:22.962654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.239 [2024-11-26 18:27:22.962718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.239 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.962977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.963042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.963333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.963402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.963633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.963697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.963937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.964004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.964260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.964352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.964640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.964707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.964934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.964998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.965281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.965359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.965630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.965695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 
00:31:35.240 [2024-11-26 18:27:22.965937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.966001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.966235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.966298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.966531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.966599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.966848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.966912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.967152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.967215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.967484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.967549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.967785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.967850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.968061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.968125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.968411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.968478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.968723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.968790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 
00:31:35.240 [2024-11-26 18:27:22.969053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.969117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.969361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.969427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.969688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.969754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.969954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.970017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.970262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.970360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.970622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.970687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.970971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.971037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.971244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.971325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.971574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.971638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.971935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.971999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 
00:31:35.240 [2024-11-26 18:27:22.972243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.972323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.972580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.972644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.972895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.972960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.973200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.973277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.973569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.973634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.240 qpair failed and we were unable to recover it. 00:31:35.240 [2024-11-26 18:27:22.973921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.240 [2024-11-26 18:27:22.973985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.974242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.974321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.974606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.974670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.974925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.974989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.975276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.975356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 
00:31:35.241 [2024-11-26 18:27:22.975571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.975638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.975858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.975923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.976196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.976260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.976475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.976541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.976820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.976884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.977085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.977150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.977372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.977438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.977699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.977765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.978070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.978135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.978356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.978421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 
00:31:35.241 [2024-11-26 18:27:22.978610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.978671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.978966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.979029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.979280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.979363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.979587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.979654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.979933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.979999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.980192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.980257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.980496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.980561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.980803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.980868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.981072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.981140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.981424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.981490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 
00:31:35.241 [2024-11-26 18:27:22.981790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.981856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.982098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.982163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.982447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.982513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.982765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.982830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.983088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.983155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.983342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.983409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.983653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.983717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.983966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.984030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.984273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.984370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.984662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.984726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 
00:31:35.241 [2024-11-26 18:27:22.984977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.985044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.985243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.985323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.985526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.985590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.241 [2024-11-26 18:27:22.985881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.241 [2024-11-26 18:27:22.985956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.241 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.986211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.986276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.986536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.986601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.986852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.986921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.987161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.987227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.987514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.987579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.987866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.987931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 
00:31:35.242 [2024-11-26 18:27:22.988225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.988288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.988517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.988584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.988807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.988872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.989121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.989186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.989412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.989479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.989694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.989758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.990013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.990077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.990378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.990444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.990680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.990745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.991039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.991104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 
00:31:35.242 [2024-11-26 18:27:22.991389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.991455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.991674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.991738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.991991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.992059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.992349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.992415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.992709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.992775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.993034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.993099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.993341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.993410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.993626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.993691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.993909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.993973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.994216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.994281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 
00:31:35.242 [2024-11-26 18:27:22.994543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.994607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.994889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.994953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.995206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.995271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.995542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.995608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.995867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.995932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.996144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.996209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.996502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.996568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.996808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.996874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.997120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.997184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.997417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.997482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 
00:31:35.242 [2024-11-26 18:27:22.997696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.997760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.242 [2024-11-26 18:27:22.997971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.242 [2024-11-26 18:27:22.998035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.242 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:22.998291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:22.998370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:22.998658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:22.998734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:22.999037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:22.999103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:22.999354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:22.999420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:22.999710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:22.999775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.000022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.000088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.000340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.000406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.000621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.000690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 
00:31:35.243 [2024-11-26 18:27:23.000987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.001052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.001292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.001376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.001581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.001647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.001940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.002004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.002291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.002392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.002644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.002708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.003004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.003069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.003375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.003442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.003688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.003753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.003992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.004058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 
00:31:35.243 [2024-11-26 18:27:23.004282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.004364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.004649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.004715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.005008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.005065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.005217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.005251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.005374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.005408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.005524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.005560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.005677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.005714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.005857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.005892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.006026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.006062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.006173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.006207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 
00:31:35.243 [2024-11-26 18:27:23.006326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.006363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.006478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.006512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.006622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.006656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.006829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.006894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.007184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.007250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.007460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.007494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.007727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.007792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.008133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.008198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.008425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.008460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 00:31:35.243 [2024-11-26 18:27:23.008633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.243 [2024-11-26 18:27:23.008699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.243 qpair failed and we were unable to recover it. 
00:31:35.243 [2024-11-26 18:27:23.008906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.008941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.009056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.009120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.009334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.009401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.009549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.009589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.009735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.009768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.010008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.010073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.010285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.010340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.010497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.010530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.010794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.010859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.011044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.011112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 
00:31:35.244 [2024-11-26 18:27:23.011369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.011404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.011548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.011582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.011723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.011756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.011856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.011913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.012154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.012220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.012447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.012482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.012612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.012686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.012936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.013003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.013268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.013357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.013530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.013565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 
00:31:35.244 [2024-11-26 18:27:23.013763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.013828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.014131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.014195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.014448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.014483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.014675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.014742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.014969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.015034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.015295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.015390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.015568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.015624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.015837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.015870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.016008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.016044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.016293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.016375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 
00:31:35.244 [2024-11-26 18:27:23.016499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.016533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.016699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.016732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.016935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.017000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.017262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.017353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.017493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.017527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.017769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.017834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.018043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.018108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.018361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.018395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.244 qpair failed and we were unable to recover it. 00:31:35.244 [2024-11-26 18:27:23.018533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.244 [2024-11-26 18:27:23.018568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.018782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.018816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 
00:31:35.245 [2024-11-26 18:27:23.019024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.019089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.019378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.019414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.019522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.019556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.019680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.019719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.020011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.020077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.020369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.020434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.020665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.020730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.020954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.021019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.021227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.021294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.021600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.021674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 
00:31:35.245 [2024-11-26 18:27:23.021891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.021959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.022207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.022272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.022506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.022573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.022821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.022885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.023187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.023252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.023578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.023678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.024023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.024104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.024474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.024555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.024840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.024909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 00:31:35.245 [2024-11-26 18:27:23.025168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.245 [2024-11-26 18:27:23.025233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.245 qpair failed and we were unable to recover it. 
00:31:35.245 [2024-11-26 18:27:23.025497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.245 [2024-11-26 18:27:23.025563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:35.245 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 2024-11-26 18:27:23.025751 through 2024-11-26 18:27:23.054706, console time 00:31:35.245-00:31:35.248 ...]
00:31:35.248 [2024-11-26 18:27:23.055023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.055120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.055418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.055491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.055751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.055821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.056074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.056139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.056384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.056451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.056704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.056770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.057055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.057119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.057381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.057448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.057657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.057722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.057972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.058036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 
00:31:35.248 [2024-11-26 18:27:23.058330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.058394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.058691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.058756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.059005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.059070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.059277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.059356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.059656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.059722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.059961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.060027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.060283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.060368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.060640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.060704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.060952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.061020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.061325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.061391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 
00:31:35.248 [2024-11-26 18:27:23.061689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.061753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.062001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.062064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.062350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.062415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.062698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.062761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.063006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.063072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.063367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.063432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.063672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.063736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.063983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.064058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.064346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.064411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.064715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.064780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 
00:31:35.248 [2024-11-26 18:27:23.065067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.065130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.065381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.065445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.065704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.065767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.066011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.066075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.066292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.066374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.248 qpair failed and we were unable to recover it. 00:31:35.248 [2024-11-26 18:27:23.066584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.248 [2024-11-26 18:27:23.066649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.066890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.066953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.067183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.067246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.067471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.067537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.067752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.067818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 
00:31:35.249 [2024-11-26 18:27:23.068023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.068088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.068390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.068456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.068713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.068778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.069001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.069065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.069331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.069396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.069674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.069737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.069961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.070025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.070318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.070383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.070645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.070709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.070952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.071018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 
00:31:35.249 [2024-11-26 18:27:23.071276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.071361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.071577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.071641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.071855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.071919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.072186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.072251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.072567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.072632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.072892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.072956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.073206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.073274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.073553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.073619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.073910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.073974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.074222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.074285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 
00:31:35.249 [2024-11-26 18:27:23.074595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.074658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.074918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.074982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.075219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.075282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.075587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.075650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.075872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.075936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.076209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.076273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.076582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.076647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.076859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.076927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.077221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.077287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 00:31:35.249 [2024-11-26 18:27:23.077557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.249 [2024-11-26 18:27:23.077625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.249 qpair failed and we were unable to recover it. 
00:31:35.249 [2024-11-26 18:27:23.077871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.077935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.078143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.078207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.078457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.078522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.078763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.078828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.079048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.079112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.079367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.079434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.079710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.079777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.080020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.080083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.080363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.080429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.080682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.080747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 
00:31:35.250 [2024-11-26 18:27:23.080994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.081059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.081317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.081384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.081677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.081741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.081951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.082016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.082231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.082294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.082602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.082667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.082911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.082974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.083265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.083352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.083600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.083663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 00:31:35.250 [2024-11-26 18:27:23.083896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.250 [2024-11-26 18:27:23.083960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.250 qpair failed and we were unable to recover it. 
00:31:35.255 [2024-11-26 18:27:23.145876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.145939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.146142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.146205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.146465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.146531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.146736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.146799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.147041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.147107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.147367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.147434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.147701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.147764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.148012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.148075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.148330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.148396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.148644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.148707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 
00:31:35.255 [2024-11-26 18:27:23.148963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.149027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.149284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.149363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.149651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.149714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.149966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.150030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.150239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.150318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.150600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.150665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.150919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.150985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.151251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.255 [2024-11-26 18:27:23.151345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.255 qpair failed and we were unable to recover it. 00:31:35.255 [2024-11-26 18:27:23.151637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.151700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.151971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.152044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 
00:31:35.256 [2024-11-26 18:27:23.152269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.152355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.152632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.152694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.152941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.153005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.153252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.153333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.153570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.153633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.153883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.153947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.154195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.154259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.154528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.154590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.154871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.154934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.155161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.155224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 
00:31:35.256 [2024-11-26 18:27:23.155496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.155560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.155796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.155859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.156138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.156201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.156474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.156540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.156778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.156841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.157067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.157128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.157339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.157424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.157707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.157772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.157962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.158025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.158266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.158344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 
00:31:35.256 [2024-11-26 18:27:23.158596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.158660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.158939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.159001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.159281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.159360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.159654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.159718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.159972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.160034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.160268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.160362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.160606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.160669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.160919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.160982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.161247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.161330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.161582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.161646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 
00:31:35.256 [2024-11-26 18:27:23.161889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.161953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.162214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.162277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.162572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.162636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.162886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.162951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.163233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.163296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.163564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.163628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.256 [2024-11-26 18:27:23.163907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.256 [2024-11-26 18:27:23.163969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.256 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.164169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.164234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.164520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.164585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.164826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.164889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 
00:31:35.257 [2024-11-26 18:27:23.165132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.165206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.165462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.165526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.165754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.165817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.166069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.166133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.166380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.166445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.166730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.166793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.167039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.167103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.167364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.167428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.167702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.167766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.167967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.168030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 
00:31:35.257 [2024-11-26 18:27:23.168267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.168345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.168638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.168701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.168909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.168972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.169254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.169329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.169593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.169656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.169905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.169967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.170208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.170270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.170581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.170644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.170890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.170952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.171171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.171234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 
00:31:35.257 [2024-11-26 18:27:23.171535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.171601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.171879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.171941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.172217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.172279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.172594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.172658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.172952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.173015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.173260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.173343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.173632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.173695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.173937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.173999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.174252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.174333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.174584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.174648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 
00:31:35.257 [2024-11-26 18:27:23.174870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.174932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.175211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.175273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.175494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.175557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.175789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.175852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.176139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.176200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.257 [2024-11-26 18:27:23.176459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.257 [2024-11-26 18:27:23.176523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.257 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.176797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.176861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.177142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.177206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.177469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.177533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.177811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.177874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 
00:31:35.258 [2024-11-26 18:27:23.178162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.178226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.178485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.178549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.178740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.178807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.179044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.179108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.179368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.179432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.179634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.179701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.179961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.180027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.180286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.180362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.180552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.180615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.180863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.180927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 
00:31:35.258 [2024-11-26 18:27:23.181185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.181249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.181514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.181578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.181863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.181925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.182126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.182192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.182496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.182562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.182815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.182880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.183132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.183196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.183454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.183519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.183794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.183856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.184066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.184129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 
00:31:35.258 [2024-11-26 18:27:23.184366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.184431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.184655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.184718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.184998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.185061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.185351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.185415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.185627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.185689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.185984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.186047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.186329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.186394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.186575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.186639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.186889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.186970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 00:31:35.258 [2024-11-26 18:27:23.187255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.187333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.258 qpair failed and we were unable to recover it. 
00:31:35.258 [2024-11-26 18:27:23.187569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.258 [2024-11-26 18:27:23.187632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.187879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.187942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.188237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.188301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.188600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.188665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.188901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.188963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.189240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.189322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.189612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.189676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.189921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.189986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.190265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.190348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.190644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.190708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 
00:31:35.259 [2024-11-26 18:27:23.190960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.191022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.191260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.191338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.191628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.191693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.191948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.192011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.192258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.192354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.192606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.192672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.192912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.192976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.193245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.193326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.193588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.193652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.193932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.193995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 
00:31:35.259 [2024-11-26 18:27:23.194272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.194353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.194634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.194697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.194944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.195008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.195285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.195366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.195602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.195665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.195912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.195975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.196240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.196332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.196619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.196683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.196899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.196960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.197244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.197326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 
00:31:35.259 [2024-11-26 18:27:23.197576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.197640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.197889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.197952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.198204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.198267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.198499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.198562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.198815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.198877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.199098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.199161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.199364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.199429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.199627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.199693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.199976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.259 [2024-11-26 18:27:23.200039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.259 qpair failed and we were unable to recover it. 00:31:35.259 [2024-11-26 18:27:23.200339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.200404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 
00:31:35.260 [2024-11-26 18:27:23.200700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.200762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.200970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.201035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.201237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.201299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.201610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.201673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.201921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.201984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.202281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.202359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.202582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.202644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.202899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.202962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.203244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.203324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.203534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.203600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 
00:31:35.260 [2024-11-26 18:27:23.203862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.203926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.204209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.204271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.204547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.204611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.204871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.204934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.205175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.205237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.205545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.205610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.205852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.205918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.206157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.206222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.206490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.206556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.206796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.206858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 
00:31:35.260 [2024-11-26 18:27:23.207105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.207168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.207450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.207516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.207710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.207773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.208010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.208073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.208357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.208423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.208709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.208772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.209061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.209134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.209379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.209443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.209684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.209746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.209957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.210020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 
00:31:35.260 [2024-11-26 18:27:23.210298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.210375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.210621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.210685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.210927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.210990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.211232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.211294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.211602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.211665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.211952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.212015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.212237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.212299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.260 [2024-11-26 18:27:23.212534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.260 [2024-11-26 18:27:23.212601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.260 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.212854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.212921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.213201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.213266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-26 18:27:23.213599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.213663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.213893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.213957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.214200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.214264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.214572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.214635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.214924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.214987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.215238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.215322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.215562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.215624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.215872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.215935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.216141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.216208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.216449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.216513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-26 18:27:23.216698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.216762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.217004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.217067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.217357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.217422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.217726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.217791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.218044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.218108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.218334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.218399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.218637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.218700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.218925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.218990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.219282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.219359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.219636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.219699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 
00:31:35.261 [2024-11-26 18:27:23.219986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.220050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.220294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.220401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.220644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.220709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.261 [2024-11-26 18:27:23.220991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.261 [2024-11-26 18:27:23.221054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.261 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.221272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.221359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.221576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.221640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.221885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.221948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.222226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.222321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.222617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.222682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.222910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.222974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 
00:31:35.536 [2024-11-26 18:27:23.223228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.223292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.223536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.223599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.223858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.223921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.224198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.224261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.536 [2024-11-26 18:27:23.224596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.536 [2024-11-26 18:27:23.224660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.536 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.224903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.224967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.225182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.225248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.225551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.225655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.225939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.226019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.226225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.226295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 
00:31:35.537 [2024-11-26 18:27:23.226545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.226614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.226891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.226960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.227209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.227274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.227510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.227578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.227821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.227886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.228130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.228193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.228440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.228506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.228741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.228805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.229053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.229118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.229370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.229438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 
00:31:35.537 [2024-11-26 18:27:23.229690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.229757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.230053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.230118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.230373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.230439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.230680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.230745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.231049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.231113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.231390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.231456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.231704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.231772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.231986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.232050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.232289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.232368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.232593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.232659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 
00:31:35.537 [2024-11-26 18:27:23.232901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.232966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.233264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.233345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.233605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.233671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.233920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.233987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.234282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.234361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.234653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.234718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.234962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.235027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.235247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.235336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.235586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.235651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.235890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.235957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 
00:31:35.537 [2024-11-26 18:27:23.236243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.236330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.236553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.236615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.537 [2024-11-26 18:27:23.236902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.537 [2024-11-26 18:27:23.236966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.537 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.237217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.237281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.237552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.237620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.237901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.237966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.238211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.238276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.238526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.238592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.238875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.238938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.239184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.239249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 
00:31:35.538 [2024-11-26 18:27:23.239524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.239592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.239858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.239923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.240198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.240263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.240574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.240640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.240923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.240989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.241216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.241280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.241580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.241644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.241926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.241990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.242195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.242260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.242523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.242588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 
00:31:35.538 [2024-11-26 18:27:23.242840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.242908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.243199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.243263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.243507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.243571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.243852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.243917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.244185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.244251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.244528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.244593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.244763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.244828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.245065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.245129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.245374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.245440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.245629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.245696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 
00:31:35.538 [2024-11-26 18:27:23.245942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.246006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.246299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.246376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.246619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.246683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.246937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.247001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.247287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.247364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.247589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.247654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.247867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.247931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.248172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.248246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.248516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.248583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 00:31:35.538 [2024-11-26 18:27:23.248789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.538 [2024-11-26 18:27:23.248854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.538 qpair failed and we were unable to recover it. 
00:31:35.544 [2024-11-26 18:27:23.308320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.308387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.308684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.308749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.309028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.309092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.309410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.309476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.309759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.309824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.310065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.310129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.310408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.310480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.310779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.310842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.311096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.311160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.311399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.311465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 
00:31:35.544 [2024-11-26 18:27:23.311753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.311816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.312117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.312181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.312395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.312461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.312720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.312784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.313074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.313145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.313432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.313498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.313745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.313808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.314032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.314096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.314347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.314414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 00:31:35.544 [2024-11-26 18:27:23.314660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.544 [2024-11-26 18:27:23.314726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.544 qpair failed and we were unable to recover it. 
00:31:35.545 [2024-11-26 18:27:23.314943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.315009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.315253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.315331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.315553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.315620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.315872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.315936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.316191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.316266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.316548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.316614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.316860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.316924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.317189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.317253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.317555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.317619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.317903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.317967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 
00:31:35.545 [2024-11-26 18:27:23.318214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.318277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.318519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.318582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.318835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.318899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.319136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.319198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.319473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.319538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.319789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.319854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.320105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.320171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.320389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.320459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.320760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.320825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.321077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.321140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 
00:31:35.545 [2024-11-26 18:27:23.321394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.321460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.321748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.321812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.322057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.322120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.322341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.322406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.322610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.322677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.322966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.323030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.323331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.323401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.323660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.323725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.324010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.324074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.324373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.324438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 
00:31:35.545 [2024-11-26 18:27:23.324720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.324784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.325076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.325140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.325394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.325460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.545 [2024-11-26 18:27:23.325719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.545 [2024-11-26 18:27:23.325782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.545 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.326074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.326138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.326441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.326507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.326756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.326822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.327113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.327176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.327473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.327539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.327824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.327887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 
00:31:35.546 [2024-11-26 18:27:23.328141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.328206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.328584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.328652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.328912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.328976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.329169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.329233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.329550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.329626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.329881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.329944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.330181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.330246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.330557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.330622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.330918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.330981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.331239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.331317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 
00:31:35.546 [2024-11-26 18:27:23.331578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.331642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.331830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.331892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.332178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.332242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.332549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.332615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.332864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.332927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.333213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.333277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.333533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.333597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.333903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.333966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.334196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.334261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.334525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.334593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 
00:31:35.546 [2024-11-26 18:27:23.334851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.334914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.335208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.335271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.335540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.335607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.335891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.335955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.336240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.336319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.336552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.336615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.336859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.336924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.337210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.337273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.337520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.337586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.337828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.337891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 
00:31:35.546 [2024-11-26 18:27:23.338183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.338247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.546 [2024-11-26 18:27:23.338521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.546 [2024-11-26 18:27:23.338587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.546 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.338847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.338912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.339212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.339276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.339524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.339588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.339874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.339939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.340199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.340263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.340523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.340589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.340852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.340919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.341125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.341188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 
00:31:35.547 [2024-11-26 18:27:23.341439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.341508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.341722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.341786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.342024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.342087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.342379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.342445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.342726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.342801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.343046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.343112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.343416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.343483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.343725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.343790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.344067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.344130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.344415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.344481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 
00:31:35.547 [2024-11-26 18:27:23.344729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.344795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.345039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.345105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.345360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.345425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.345711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.345776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.346065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.346130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.346424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.346489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.346782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.346847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.347098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.347162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.347467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.347532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.347838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.347902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 
00:31:35.547 [2024-11-26 18:27:23.348107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.348173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.348427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.348493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.348723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.348788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.349044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.349108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.349359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.349424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.349681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.349748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.349957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.350024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.350271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.350350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.350601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.350665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 00:31:35.547 [2024-11-26 18:27:23.350865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.547 [2024-11-26 18:27:23.350931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.547 qpair failed and we were unable to recover it. 
00:31:35.547 [2024-11-26 18:27:23.351226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.548 [2024-11-26 18:27:23.351291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:35.548 qpair failed and we were unable to recover it.
00:31:35.548 [... the same three-message failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 18:27:23.351 through 18:27:23.420 ...]
00:31:35.553 [2024-11-26 18:27:23.420074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.553 [2024-11-26 18:27:23.420141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:35.553 qpair failed and we were unable to recover it.
00:31:35.553 [2024-11-26 18:27:23.420390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.420459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.420703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.420769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.421046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.421112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.421372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.421438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.421695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.421773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.422062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.422127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.422393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.422458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.422759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.422822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.423079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.423144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.423383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.423449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 
00:31:35.553 [2024-11-26 18:27:23.423741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.423806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.424021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.424086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.424287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.424370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.424627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.424692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.424970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.553 [2024-11-26 18:27:23.425034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.553 qpair failed and we were unable to recover it. 00:31:35.553 [2024-11-26 18:27:23.425275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.425354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.425566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.425633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.425916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.425980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.426236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.426301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.426537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.426605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 
00:31:35.554 [2024-11-26 18:27:23.426805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.426870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.427124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.427189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.427494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.427561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.427804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.427870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.428097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.428161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.428452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.428519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.428762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.428826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.429115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.429179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.429406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.429472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.429751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.429814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 
00:31:35.554 [2024-11-26 18:27:23.430102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.430166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.430431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.430498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.430783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.430848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.431053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.431117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.431378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.431443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.431742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.431805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.432055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.432120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.432408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.432474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.432734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.432798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.433037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.433101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 
00:31:35.554 [2024-11-26 18:27:23.433350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.433415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.433702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.433766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.434021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.434089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.434345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.434410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.434639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.434703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.434971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.435036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.435286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.435382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.435635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.435700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.435945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.436011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.554 qpair failed and we were unable to recover it. 00:31:35.554 [2024-11-26 18:27:23.436326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.554 [2024-11-26 18:27:23.436391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 
00:31:35.555 [2024-11-26 18:27:23.436606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.436670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.436915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.436980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.437267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.437345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.437611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.437675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.437875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.437943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.438160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.438226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.438529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.438594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.438875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.438939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.439231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.439296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.439605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.439670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 
00:31:35.555 [2024-11-26 18:27:23.439976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.440041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.440337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.440403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.440624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.440689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.440934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.440998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.441296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.441374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.441660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.441724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.442017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.442081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.442324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.442390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.442645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.442709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.442936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.443000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 
00:31:35.555 [2024-11-26 18:27:23.443243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.443337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.443634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.443709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.443908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.443975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.444256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.444338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.444628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.444693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.444979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.445042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.445263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.445341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.445645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.445710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.445961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.446025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.446271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.446349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 
00:31:35.555 [2024-11-26 18:27:23.446594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.446659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.446914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.446977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.447215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.447280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.447562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.447627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.447833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.447899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.448206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.448271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.448576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.448642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.555 qpair failed and we were unable to recover it. 00:31:35.555 [2024-11-26 18:27:23.448935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.555 [2024-11-26 18:27:23.448998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.449285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.449364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.449637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.449703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 
00:31:35.556 [2024-11-26 18:27:23.449921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.449985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.450255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.450334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.450632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.450697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.450934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.450999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.451281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.451362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.451622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.451685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.451931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.451995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.452247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.452325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.452589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.452653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.452898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.452962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 
00:31:35.556 [2024-11-26 18:27:23.453211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.453275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.453574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.453638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.453938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.454002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.454242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.454322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.454537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.454601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.454883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.454947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.455230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.455296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.455640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.455703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.455945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.456009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.456255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.456341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 
00:31:35.556 [2024-11-26 18:27:23.456615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.456678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.456968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.457043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.457293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.457378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.457668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.457732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.457986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.458051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.458318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.458385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.458673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.458737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.458976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.459040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.459337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.459402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.459695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.459759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 
00:31:35.556 [2024-11-26 18:27:23.460052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.460116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.460370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.460437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.460726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.460791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.461053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.461117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.461399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.461464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.556 [2024-11-26 18:27:23.461774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.556 [2024-11-26 18:27:23.461839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.556 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.462092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.462156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.462403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.462468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.462756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.462819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.463075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.463142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 
00:31:35.557 [2024-11-26 18:27:23.463393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.463459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.463752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.463816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.464077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.464140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.464381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.464451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.464719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.464784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.465022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.465086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.465337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.465405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.465704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.465768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.466020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.466086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 00:31:35.557 [2024-11-26 18:27:23.466315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.557 [2024-11-26 18:27:23.466380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.557 qpair failed and we were unable to recover it. 
[... identical retries continue: posix.c:1054:posix_sock_create connect() failed, errno = 111, and nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reported sock connection errors for tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", from 2024-11-26 18:27:23.466 through 18:27:23.532 ...]
00:31:35.843 [2024-11-26 18:27:23.532616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.532681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.532939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.533011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.533210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.533277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.533600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.533666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.533914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.533980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.534242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.534339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.534565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.534632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.534873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.534938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.535201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.535266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.535501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.535566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 
00:31:35.843 [2024-11-26 18:27:23.535852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.535916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.536110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.536185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.536410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.536476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.536757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.536821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.537120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.537183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.537449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.537514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.537754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.537819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.538106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.538168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.538394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.538460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.538696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.538760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 
00:31:35.843 [2024-11-26 18:27:23.538984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.539048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.539337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.539402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.539661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.539728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.539974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.540038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.540332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.540397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.540696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.540761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.541026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.541090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.541378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.541442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.541642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.541708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.843 qpair failed and we were unable to recover it. 00:31:35.843 [2024-11-26 18:27:23.541966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.843 [2024-11-26 18:27:23.542031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 
00:31:35.844 [2024-11-26 18:27:23.542277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.542356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.542576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.542643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.542924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.542989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.543227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.543293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.543539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.543604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.543867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.543932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.544110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.544174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.544392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.544460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.544721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.544785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.545014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.545079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 
00:31:35.844 [2024-11-26 18:27:23.545335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.545403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.545655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.545718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.545996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.546060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.546359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.546424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.546709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.546774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.547029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.547093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.547342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.547411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.547714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.547777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.548079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.548144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.548376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.548443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 
00:31:35.844 [2024-11-26 18:27:23.548728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.548793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.549010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.549085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.549338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.549406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.549662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.549726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.550034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.550099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.550350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.550415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.550659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.550723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.551007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.551070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.551361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.551427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.551669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.551735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 
00:31:35.844 [2024-11-26 18:27:23.552012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.552076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.552360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.552427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.552683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.552749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.553001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.553065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.553327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.553393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.553698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.553763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.554050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.844 [2024-11-26 18:27:23.554113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.844 qpair failed and we were unable to recover it. 00:31:35.844 [2024-11-26 18:27:23.554373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.554439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.554734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.554798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.555046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.555112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 
00:31:35.845 [2024-11-26 18:27:23.555347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.555412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.555684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.555748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.556042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.556106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.556356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.556423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.556715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.556778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.557023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.557090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.557380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.557445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.557693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.557758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.557996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.558061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.558331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.558397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 
00:31:35.845 [2024-11-26 18:27:23.558613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.558679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.558977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.559041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.559247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.559345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.559600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.559664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.559965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.560028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.560331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.560397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.560654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.560717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.560961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.561025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.561317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.561384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.561646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.561711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 
00:31:35.845 [2024-11-26 18:27:23.561995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.562060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.562354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.562431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.562671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.562734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.563014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.563078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.563336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.563402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.563658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.563722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.563964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.564029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.564231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.564294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.564559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.564623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.564870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.564933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 
00:31:35.845 [2024-11-26 18:27:23.565165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.565228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.565489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.565556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.565759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.565827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.566069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.566133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.566367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.845 [2024-11-26 18:27:23.566432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.845 qpair failed and we were unable to recover it. 00:31:35.845 [2024-11-26 18:27:23.566730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.566796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.567039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.567102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.567342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.567407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.567641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.567674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.567777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.567812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 
00:31:35.846 [2024-11-26 18:27:23.567952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.567985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.568128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.568161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.568336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.568398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.568534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.568577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.568691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.568724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.568868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.568903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.569026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.569105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.569372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.569408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.569565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.569600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.569865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.569898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 
00:31:35.846 [2024-11-26 18:27:23.570012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.570045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.570227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.570292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.570451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.570484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.570595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.570629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.570764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.570798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.571033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.571097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.571318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.571383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.571492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.571526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.571643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.571677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.571836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.571899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 
00:31:35.846 [2024-11-26 18:27:23.572157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.572191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.572491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.572531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.572692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.572756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.572992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.573065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.573322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.573397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.573544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.573579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.573728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.573784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.574018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.574084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.574282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.574380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.574530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.574563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 
00:31:35.846 [2024-11-26 18:27:23.574669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.574723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.575020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.575085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.575376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.575411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.846 qpair failed and we were unable to recover it. 00:31:35.846 [2024-11-26 18:27:23.575555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.846 [2024-11-26 18:27:23.575588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.575727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.575761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.575975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.576041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.576292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.576367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.576515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.576548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.576684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.576766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.576980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.577044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 
00:31:35.847 [2024-11-26 18:27:23.577300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.577386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.577526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.577560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.577764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.577828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.578126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.578190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.578436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.578471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.578618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.578651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.578773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.578842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.579128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.579193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.579424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.579458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.579609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.579643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 
00:31:35.847 [2024-11-26 18:27:23.579805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.579872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.580116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.580179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.580413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.580448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.580586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.580620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.580724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.580783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.580997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.581061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.581299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.581383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.581502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.581535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.581655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.581689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.581979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.582043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 
00:31:35.847 [2024-11-26 18:27:23.582349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.582384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.582525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.582564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.582721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.582785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.582986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.583053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.583361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.583396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.583508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.847 [2024-11-26 18:27:23.583542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.847 qpair failed and we were unable to recover it. 00:31:35.847 [2024-11-26 18:27:23.583747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.583810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.584059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.584123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.584392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.584426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.584540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.584573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 
00:31:35.848 [2024-11-26 18:27:23.584714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.584780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.585017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.585070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.585327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.585380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.585527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.585561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.585733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.585793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.585995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.586057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.586331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.586385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.586504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.586536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.586683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.586716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.586924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.586988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 
00:31:35.848 [2024-11-26 18:27:23.587269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.587326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.587501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.587534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.587716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.587783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.588073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.588137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.588350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.588419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.588659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.588725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.589013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.589076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.589331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.589400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.589632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.589698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.589944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.590008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 
00:31:35.848 [2024-11-26 18:27:23.590293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.590370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.590629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.590697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.590965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.591029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.591327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.591393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.591679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.591744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.592045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.592108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.592348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.592415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.592650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.592714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.592972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.593035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.593276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.593355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 
00:31:35.848 [2024-11-26 18:27:23.593646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.593710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.593897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.593960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.594234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.594299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.594615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.594678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.848 qpair failed and we were unable to recover it. 00:31:35.848 [2024-11-26 18:27:23.594932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.848 [2024-11-26 18:27:23.594996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.595252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.595352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.595652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.595716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.595975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.596039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.596238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.596324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.596584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.596648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 
00:31:35.849 [2024-11-26 18:27:23.596897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.596962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.597214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.597247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.597481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.597546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.597801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.597864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.598101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.598165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.598410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.598476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.598762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.598825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.599108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.599171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.599431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.599467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.599581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.599614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 
00:31:35.849 [2024-11-26 18:27:23.599859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.599922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.600218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.600281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.600566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.600630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.600923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.600987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.601236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.601318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.601550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.601617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.601865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.601929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.602178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.602242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.602516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.602598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.602893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.602959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 
00:31:35.849 [2024-11-26 18:27:23.603237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.603329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.603638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.603702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.603990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.604024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.604159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.604192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.604380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.604445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.604697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.604766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.605066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.605129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.605421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.605487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.605774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.849 [2024-11-26 18:27:23.605839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.849 qpair failed and we were unable to recover it. 00:31:35.849 [2024-11-26 18:27:23.606127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.606160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 
00:31:35.850 [2024-11-26 18:27:23.606299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.606340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.606575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.606639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.606873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.606939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.607206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.607271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.607499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.607563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.607774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.607838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.608126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.608190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.608476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.608540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.608836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.608900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.609133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.609166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 
00:31:35.850 [2024-11-26 18:27:23.609300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.609341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.609644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.609710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.609935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.609969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.610143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.610176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.610369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.610434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.610697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.610732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.610904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.610975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.611210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.611243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.611389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.611423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.611630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.611696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 
00:31:35.850 [2024-11-26 18:27:23.611940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.612006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.612209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.612276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.612592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.612658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.612901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.612966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.613217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.613281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.613592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.613657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.613896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.613961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.614237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.614300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.614573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.614613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.614753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.614788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 
00:31:35.850 [2024-11-26 18:27:23.614955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.614988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.615089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.615121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.615244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.615277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.615332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbdf30 (9): Bad file descriptor 00:31:35.850 [2024-11-26 18:27:23.615538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.615589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.615797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.615865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.616124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.616187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.850 [2024-11-26 18:27:23.616446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.850 [2024-11-26 18:27:23.616512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.850 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.616717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.616781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.617026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.617089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 
00:31:35.851 [2024-11-26 18:27:23.617363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.617428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.617717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.617781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.618065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.618141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.618366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.618432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.618685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.618718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.618820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.618853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.618965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.619000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.619243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.619323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.619596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.619659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.619908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.619971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 
00:31:35.851 [2024-11-26 18:27:23.620242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.620275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.620428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.620461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.620569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.620602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.620784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.620841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.621007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.621039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.621273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.621348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.621662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.621726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.621969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.622032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.622297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.622339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.622473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.622506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 
00:31:35.851 [2024-11-26 18:27:23.622613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.622646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.622804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.622837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.623106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.623168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.623463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.623527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.623808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.623871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.624074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.624138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.624377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.624441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.624662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.624725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.624951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.625014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.625206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.625279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 
00:31:35.851 [2024-11-26 18:27:23.625589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.625652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.625869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.625935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.626133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.626196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.626438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.626503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.626748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.626813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.627072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.851 [2024-11-26 18:27:23.627136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.851 qpair failed and we were unable to recover it. 00:31:35.851 [2024-11-26 18:27:23.627354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.627420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.627637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.627700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.627951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.628014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.628299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.628379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 
00:31:35.852 [2024-11-26 18:27:23.628645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.628678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.628817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.628850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.629049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.629113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.629415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.629479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.629725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.629788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.630030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.630093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.630358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.630422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.630669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.630732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.630989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.631052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.631258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.631354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 
00:31:35.852 [2024-11-26 18:27:23.631640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.631703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.631943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.632009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.632300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.632387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.632688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.632720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.632833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.632867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.632973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.633055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.633337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.633402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.633657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.633721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.633940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.634005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.634226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.634288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 
00:31:35.852 [2024-11-26 18:27:23.634524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.634591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.634838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.634901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.635139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.635203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.635512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.635577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.635818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.635881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.636134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.636197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.636511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.636575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.636803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.636836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.636983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.637015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 00:31:35.852 [2024-11-26 18:27:23.637227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.637293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.852 qpair failed and we were unable to recover it. 
00:31:35.852 [2024-11-26 18:27:23.637529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.852 [2024-11-26 18:27:23.637605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.637904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.637967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.638217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.638280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.638582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.638646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.638881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.638944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.639185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.639248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.639590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.639691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.640016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.640056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.640249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.640288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.640446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.640512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 
00:31:35.853 [2024-11-26 18:27:23.640768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.640831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.641103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.641136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.641308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.641385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.641676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.641739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.642002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.642065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.642334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.642399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.642697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.642761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.643005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.643071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.643353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.643419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.643702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.643765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 
00:31:35.853 [2024-11-26 18:27:23.644052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.644115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.644395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.644461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.644705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.644769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.645050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.645112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.645343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.645377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.645515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.645548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.645742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.645804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.646083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.646157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.646425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.646491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.646730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.646793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 
00:31:35.853 [2024-11-26 18:27:23.647069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.647133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.647352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.647417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.647705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.647768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.648052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.648116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.648404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.648469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.648686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.648752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.649010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.649073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.649327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.649392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.649648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.853 [2024-11-26 18:27:23.649712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.853 qpair failed and we were unable to recover it. 00:31:35.853 [2024-11-26 18:27:23.649937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.649999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 
00:31:35.854 [2024-11-26 18:27:23.650248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.650346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.650592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.650657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.650901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.650965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.651197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.651260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.651519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.651583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.651798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.651861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.652150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.652182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.652313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.652346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.652484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.652518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.652801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.652865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 
00:31:35.854 [2024-11-26 18:27:23.653110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.653173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.653463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.653529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.653792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.653855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.654112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.654175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.654468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.654533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.654836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.654899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.655121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.655185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.655434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.655499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.655751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.655814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.656047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.656111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 
00:31:35.854 [2024-11-26 18:27:23.656386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.656452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.656699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.656762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.657013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.657076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.657334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.657398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.657649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.657713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.657940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.658005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.658280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.658373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.658622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.658685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.658929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.659002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.659284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.659369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 
00:31:35.854 [2024-11-26 18:27:23.659638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.659701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.659905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.659970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.660250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.660335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.660623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.660686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.660973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.661037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.661334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.661398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.661612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.661675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.661912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.854 [2024-11-26 18:27:23.661976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.854 qpair failed and we were unable to recover it. 00:31:35.854 [2024-11-26 18:27:23.662183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.662245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.662538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.662572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 
00:31:35.855 [2024-11-26 18:27:23.662744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.662814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.662997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.663060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.663328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.663393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.663635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.663698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.663989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.664051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.664328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.664393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.664652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.664715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.665013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.665076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.665292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.665373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.665604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.665667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 
00:31:35.855 [2024-11-26 18:27:23.665906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.665969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.666222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.666285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.666562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.666595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.666705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.666738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.666970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.667032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.667269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.667362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.667610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.667673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.667956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.668018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.668298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.668379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.668624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.668689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 
00:31:35.855 [2024-11-26 18:27:23.668944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.668977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.669119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.669189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.669446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.669480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.669625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.669658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.669859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.669891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.670027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.670060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.670187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.670220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.670446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.670511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.670759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.670823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.671090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.671155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 
00:31:35.855 [2024-11-26 18:27:23.671447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.671512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.671759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.671823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.672067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.672129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.672344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.672410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.855 qpair failed and we were unable to recover it. 00:31:35.855 [2024-11-26 18:27:23.672656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.855 [2024-11-26 18:27:23.672719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.672948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.673011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.673260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.673338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.673542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.673606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.673893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.673956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.674167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.674232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 
00:31:35.856 [2024-11-26 18:27:23.674488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.674553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.674838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.674900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.675142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.675206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.675543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.675607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.675856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.675918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.676134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.676197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.676483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.676548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.676803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.676865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.677109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.677176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.677424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.677489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 
00:31:35.856 [2024-11-26 18:27:23.677775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.677837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.678095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.678159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.678442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.678507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.678756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.678818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.679041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.679104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.679326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.679394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.679644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.679717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.679970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.680033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.680325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.680390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.680671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.680733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 
00:31:35.856 [2024-11-26 18:27:23.680942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.681005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.681226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.681293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.681561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.681625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.681839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.681905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.682191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.682255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.682532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.682596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.682880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.682943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.856 [2024-11-26 18:27:23.683201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.856 [2024-11-26 18:27:23.683263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.856 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.683567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.683630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.683918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.683982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 
00:31:35.857 [2024-11-26 18:27:23.684245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.684327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.684581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.684613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.684762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.684795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.685093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.685155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.685414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.685479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.685759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.685822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.686072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.686134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.686378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.686442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.686720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.686753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.686900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.686933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 
00:31:35.857 [2024-11-26 18:27:23.687203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.687264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.687508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.687572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.687819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.687884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.688162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.688225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.688493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.688560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.688775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.688839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.689089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.689151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.689424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.689489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.689781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.689845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.690092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.690154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 
00:31:35.857 [2024-11-26 18:27:23.690401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.690435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.690578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.690610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.690824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.690887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.691137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.691201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.691474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.691542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.691794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.691856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.692097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.692160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.692519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.692619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.692977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.693056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.693360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.693435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 
00:31:35.857 [2024-11-26 18:27:23.693713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.693779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.694073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.694136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.694374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.694407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.694545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.694578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.857 [2024-11-26 18:27:23.694761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.857 [2024-11-26 18:27:23.694824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.857 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.695073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.695135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.695385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.695451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.695732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.695765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.695905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.695940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.696170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.696235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 
00:31:35.858 [2024-11-26 18:27:23.696494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.696557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.696864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.696927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.697217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.697280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.697588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.697651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.697902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.697935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.698076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.698108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.698264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.698363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.698623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.698687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.698933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.698995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.699231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.699293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 
00:31:35.858 [2024-11-26 18:27:23.699608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.699672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.699958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.700020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.700233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.700296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.700608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.700672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.700960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.701032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.701293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.701376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.701650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.701714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.701993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.702055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.702378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.702444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.702701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.702767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 
00:31:35.858 [2024-11-26 18:27:23.703061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.703125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.703411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.703477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.703677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.703743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.703989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.704054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.704259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.704337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.704636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.704699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.704989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.705052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.705274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.705352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.705650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.705713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.705955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.706018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 
00:31:35.858 [2024-11-26 18:27:23.706265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.706347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.706544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.706607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.706892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.706955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.858 qpair failed and we were unable to recover it. 00:31:35.858 [2024-11-26 18:27:23.707159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.858 [2024-11-26 18:27:23.707224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.707485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.707551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.707803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.707867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.708101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.708163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.708448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.708515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.708720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.708784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.709075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.709139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 
00:31:35.859 [2024-11-26 18:27:23.709428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.709491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.709706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.709780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.710013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.710076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.711708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.711786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.715318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.715374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.715595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.715625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.715754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.715783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.715887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.715916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.716020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.716047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.716171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.716198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 
00:31:35.859 [2024-11-26 18:27:23.716329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.716364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.716553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.716580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.716796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.716823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.716921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.716947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.717042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.717069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.717166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.717192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.717317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.717355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.717505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.717533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.717625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.717651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.717769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.717796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 
00:31:35.859 [2024-11-26 18:27:23.717918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.717945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.718038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.718066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.718215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.718241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.718368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.718395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.718533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.718560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.718669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.718695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.718833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.718859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.718972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.718998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.719090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.719118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.719241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.719266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 
00:31:35.859 [2024-11-26 18:27:23.719391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.719419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.719521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.859 [2024-11-26 18:27:23.719548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.859 qpair failed and we were unable to recover it. 00:31:35.859 [2024-11-26 18:27:23.719670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.719696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.719817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.719844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.719963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.719990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.720101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.720127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.720213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.720238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.720365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.720391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.720497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.720523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.720638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.720664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 
00:31:35.860 [2024-11-26 18:27:23.720806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.720832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.720946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.720971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.721110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.721141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.721281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.721322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.721467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.721493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.721618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.721643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.721723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.721748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.721894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.721920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.722045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.722071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 00:31:35.860 [2024-11-26 18:27:23.722151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.860 [2024-11-26 18:27:23.722176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.860 qpair failed and we were unable to recover it. 
00:31:35.860 [2024-11-26 18:27:23.722284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.860 [2024-11-26 18:27:23.722317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:35.860 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats for every connect attempt from 18:27:23.722284 through 18:27:23.752128, always against addr=10.0.0.2, port=4420: against tqpair=0xfaffa0 up to 18:27:23.734051, against tqpair=0x7f5074000b90 from 18:27:23.734250 through 18:27:23.745767, then against tqpair=0xfaffa0 again ...]
00:31:35.866 [2024-11-26 18:27:23.752103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.866 [2024-11-26 18:27:23.752128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:35.866 qpair failed and we were unable to recover it.
00:31:35.866 [2024-11-26 18:27:23.752238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.752263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.752385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.752433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.752569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.752618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.752736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.752761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.752865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.752890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.753003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.753029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.753112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.753137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.753224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.753250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.753371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.753397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.753509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.753534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 
00:31:35.866 [2024-11-26 18:27:23.753618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.753643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.753724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.753749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.753836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.753865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.753964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.753989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.754103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.754129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.754212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.754238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.754364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.754390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.754506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.754531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.754609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.754635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.754747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.754773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 
00:31:35.866 [2024-11-26 18:27:23.754851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.754876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.754965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.754990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.755130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.755156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.755242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.755268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.755392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.755418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.755522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.755547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.866 [2024-11-26 18:27:23.755665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.866 [2024-11-26 18:27:23.755690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.866 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.755799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.755825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.755910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.755935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.756056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.756082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 
00:31:35.867 [2024-11-26 18:27:23.756194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.756220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.756361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.756387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.756465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.756490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.756584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.756611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.756700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.756725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.756807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.756832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.756939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.756964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.757071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.757096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.757182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.757206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.757315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.757359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 
00:31:35.867 [2024-11-26 18:27:23.757469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.757494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.757604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.757629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.757708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.757733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.757868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.757893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.758015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.758040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.758154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.758178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.758295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.758328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.758420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.758447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.758531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.758556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.758645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.758670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 
00:31:35.867 [2024-11-26 18:27:23.758779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.758804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.758942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.758967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.759066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.759091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.759176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.759201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.759312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.759339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.759426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.759451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.759559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.759584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.759663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.759688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.759760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.759785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.759884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.759910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 
00:31:35.867 [2024-11-26 18:27:23.759991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.760017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.760103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.760128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.760238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.760264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.760389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.760415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.760509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.760535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.867 qpair failed and we were unable to recover it. 00:31:35.867 [2024-11-26 18:27:23.760653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.867 [2024-11-26 18:27:23.760678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.760781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.760807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.760908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.760933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.761031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.761056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.761169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.761195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 
00:31:35.868 [2024-11-26 18:27:23.761279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.761313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.761392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.761417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.761535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.761570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.761705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.761730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.761806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.761831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.761941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.761966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.762072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.762097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.762215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.762240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.762360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.762386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.762473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.762500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 
00:31:35.868 [2024-11-26 18:27:23.762621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.762651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.762734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.762759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.762872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.762897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.762990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.763016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.763120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.763146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.763225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.763251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.763335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.763363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.763451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.763476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.763598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.763625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.763734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.763760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 
00:31:35.868 [2024-11-26 18:27:23.763885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.763910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.764051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.764077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.764191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.764217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.764298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.764332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.764412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.764438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.764550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.764575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.764686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.764711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.764817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.764842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.764990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.765015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.765150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.765175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 
00:31:35.868 [2024-11-26 18:27:23.765253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.765279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.765435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.765461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.765614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.765662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.868 [2024-11-26 18:27:23.765823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.868 [2024-11-26 18:27:23.765867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.868 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.765952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.765978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.766098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.766123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.766207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.766232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.766348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.766379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.766507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.766535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.766663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.766689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 
00:31:35.869 [2024-11-26 18:27:23.766797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.766822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.766955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.766980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.767071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.767096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.767191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.767216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.767330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.767357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.767469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.767495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.767614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.767639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.767751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.767777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.767860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.767885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.767967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.767992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 
00:31:35.869 [2024-11-26 18:27:23.768127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.768152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.768279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.768327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.768435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.768463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.768623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.768649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.768775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.768802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.768916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.768942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.769022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.769048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.769187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.769213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.769331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.769357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 00:31:35.869 [2024-11-26 18:27:23.769485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.869 [2024-11-26 18:27:23.769527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.869 qpair failed and we were unable to recover it. 
00:31:35.869 [2024-11-26 18:27:23.769675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.869 [2024-11-26 18:27:23.769703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:35.869 qpair failed and we were unable to recover it.
00:31:35.870 [2024-11-26 18:27:23.772178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.870 [2024-11-26 18:27:23.772218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:35.870 qpair failed and we were unable to recover it.
00:31:35.871 [2024-11-26 18:27:23.781320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.871 [2024-11-26 18:27:23.781381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:35.871 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats continuously from 18:27:23.769 through 18:27:23.797 for tqpair=0xfaffa0, 0x7f5078000b90, and 0x7f5074000b90, all against addr=10.0.0.2, port=4420 ...]
00:31:35.874 [2024-11-26 18:27:23.797458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:35.874 [2024-11-26 18:27:23.797483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:35.874 qpair failed and we were unable to recover it.
00:31:35.874 [2024-11-26 18:27:23.797580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.874 [2024-11-26 18:27:23.797606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.797746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.797771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.797880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.797905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.797979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.798004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.798094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.798119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.798199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.798226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.798316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.798342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.798459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.798484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.798592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.798617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.798705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.798732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 
00:31:35.875 [2024-11-26 18:27:23.798868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.798907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.799018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.799050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.799172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.799212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.799336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.799368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.799482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.799508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.799627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.799653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.799731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.799756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.799869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.799895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.799975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.800001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.800116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.800143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 
00:31:35.875 [2024-11-26 18:27:23.800307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.800353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.800441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.800468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.800565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.800591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.800673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.800698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.800780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.800806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.800890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.800915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.801030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.801059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.801142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.801167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.801256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.801282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.801422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.801448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 
00:31:35.875 [2024-11-26 18:27:23.801567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.801603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.801684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.801711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.801854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.801880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.801999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.802026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.802186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.802225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.802360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.802389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.802508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.802534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.802648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.802674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.802763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.802790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.802931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.802956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 
00:31:35.875 [2024-11-26 18:27:23.803037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.803063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.803150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.803174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.803287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.803320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.875 [2024-11-26 18:27:23.803416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.875 [2024-11-26 18:27:23.803440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.875 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.803520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.803547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.803667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.803693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.803783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.803809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.803924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.803949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.804058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.804084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.804172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.804197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 
00:31:35.876 [2024-11-26 18:27:23.804280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.804333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.804449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.804474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.804584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.804609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.804759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.804785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.804876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.804900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.805014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.805040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.805121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.805146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.805219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.805244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.805337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.805363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.805450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.805476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 
00:31:35.876 [2024-11-26 18:27:23.805615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.805640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.805751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.805777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.805891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.805916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.806035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.806060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.806181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.806206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.806324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.806350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.806470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.806496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.806606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.806631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.806747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.806773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.806884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.806909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 
00:31:35.876 [2024-11-26 18:27:23.807024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.807050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.807139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.807163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.807257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.807282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.807379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.807404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.807519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.807543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.807657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.807683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.807788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.807812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.807903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.807931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.808047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.808072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.808191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.808217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 
00:31:35.876 [2024-11-26 18:27:23.808328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.808353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.808436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.808461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.808562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.808600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.808714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.808741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.808860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.808887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.808977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.809003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.809122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.809148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.809231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.876 [2024-11-26 18:27:23.809257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.876 qpair failed and we were unable to recover it. 00:31:35.876 [2024-11-26 18:27:23.809378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.809405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.809522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.809550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 
00:31:35.877 [2024-11-26 18:27:23.809644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.809676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.809789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.809816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.809897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.809921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.810010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.810035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.810144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.810168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.810257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.810283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.810398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.810424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.810532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.810558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.810638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.810663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.810736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.810761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 
00:31:35.877 [2024-11-26 18:27:23.810876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.810903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.811004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.811030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.811113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.811140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.811281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.811313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.811399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.811425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.811536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.811573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.811663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.811691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.811835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.811860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.811957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.811982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.812092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.812118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 
00:31:35.877 [2024-11-26 18:27:23.812228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.812252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.812366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.812391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.812476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.812502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.812584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.812609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.812720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.812746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.812856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.812880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.812991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.813017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.813130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.813154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.813295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.813327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.813464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.813489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 
00:31:35.877 [2024-11-26 18:27:23.813585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.813610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.813745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.813771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.813879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.813904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.813991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.814019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.814105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.814131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.814247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.814273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.814423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.814448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.814540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.814566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.814678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.814702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.814834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.814859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 
00:31:35.877 [2024-11-26 18:27:23.814971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.815001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.815141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.815165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.877 [2024-11-26 18:27:23.815321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.877 [2024-11-26 18:27:23.815347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.877 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.815497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.815523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.815639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.815664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.815753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.815778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.815866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.815891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.816005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.816030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.816136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.816161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.816271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.816297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 
00:31:35.878 [2024-11-26 18:27:23.816391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.816416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.816498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.816523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.816669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.816694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.816780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.816805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.816927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.816952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.817036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.817061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.817197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.817222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.817307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.817333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.817420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.817444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.817530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.817555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 
00:31:35.878 [2024-11-26 18:27:23.817661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.817687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.817792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.817817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.817921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.817947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.818051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.818076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.818191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.818217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.818336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.818361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.818448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.818474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.818586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.818625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.818724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.818752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.818838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.818865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 
00:31:35.878 [2024-11-26 18:27:23.818978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.819005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.819129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.819156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.819272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.819298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.819416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.819444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.819529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.819553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.819650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.819677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.819784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.819810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.819919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.819945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.820027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.820051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.820138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.820166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 
00:31:35.878 [2024-11-26 18:27:23.820281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.820319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.820440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.820466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.820559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.878 [2024-11-26 18:27:23.820595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.878 qpair failed and we were unable to recover it. 00:31:35.878 [2024-11-26 18:27:23.820710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.820736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.820816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.820842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.820953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.820979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.821089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.821115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.821234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.821260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.821362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.821389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.821503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.821529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 
00:31:35.879 [2024-11-26 18:27:23.821621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.821647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.821782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.821808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.821891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.821916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.822006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.822030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.822118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.822144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.822256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.822281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.822404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.822430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.822540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.822565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.822676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.822702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.822845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.822870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 
00:31:35.879 [2024-11-26 18:27:23.822949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.822974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.823091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.823116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.823207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.823232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.823325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.823357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.823468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.823494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.823614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.823640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.823784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.823809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.823928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.823954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.824041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.824067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.824181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.824208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 
00:31:35.879 [2024-11-26 18:27:23.824358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.824383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.824465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.824490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.824598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.824623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.824759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.824784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.824868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.824893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.825002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.825027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.825141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.825167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.825279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.825309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.825432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.825458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.825536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.825563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 
00:31:35.879 [2024-11-26 18:27:23.825710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.825741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.825866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.825890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.826011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.826036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.826149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.826175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.826262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.826287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.826388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.826418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.826501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.826526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.826614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.826638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.826722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.879 [2024-11-26 18:27:23.826747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.879 qpair failed and we were unable to recover it. 00:31:35.879 [2024-11-26 18:27:23.826860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.826886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 
00:31:35.880 [2024-11-26 18:27:23.826963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.826988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.827104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.827130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.827205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.827230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.827379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.827405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.827488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.827514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.827611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.827637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.827747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.827772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.827853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.827879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.827957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.827982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.828086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.828124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 
00:31:35.880 [2024-11-26 18:27:23.828286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.828336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.828456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.828484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.828573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.828599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.828717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.828742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.828821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.828847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.828955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.828981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.829125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.829150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.829234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.829265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.829359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.829386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 00:31:35.880 [2024-11-26 18:27:23.829470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:35.880 [2024-11-26 18:27:23.829495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:35.880 qpair failed and we were unable to recover it. 
00:31:36.176 [2024-11-26 18:27:23.829636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.829662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.829752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.829778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.829864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.829890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.829966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.829991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.830105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.830130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.830232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.830262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.830368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.830396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.830480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.830507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.830597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.830625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.830768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.830794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 
00:31:36.176 [2024-11-26 18:27:23.830878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.830904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.831018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.831046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.831159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.831185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.831297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.831333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.831424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.831451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.831564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.831589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.831670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.831695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.831806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.831831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.831944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.831969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.832050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.832075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 
00:31:36.176 [2024-11-26 18:27:23.832164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.832192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.832310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.832336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.832446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.832472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.832563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.832599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.832714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.832745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.832863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.832888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.833012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.176 [2024-11-26 18:27:23.833037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.176 qpair failed and we were unable to recover it. 00:31:36.176 [2024-11-26 18:27:23.833151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.833178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.833328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.833353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.833440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.833467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 
00:31:36.177 [2024-11-26 18:27:23.833554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.833583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.833665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.833690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.833832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.833857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.833975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.834000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.834085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.834110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.834217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.834242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.834384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.834410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.834493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.834518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.834644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.834669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.834806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.834831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 
00:31:36.177 [2024-11-26 18:27:23.834944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.834971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.835056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.835081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.835192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.835217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.835329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.835355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.835453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.835478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.835566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.835591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.835702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.835727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.835836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.835861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.835950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.835976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.836068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.836097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 
00:31:36.177 [2024-11-26 18:27:23.836242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.836268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.836384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.836413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.836501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.836527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.836673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.836699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.836778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.836804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.836883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.836909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.837047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.837074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.837165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.837191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.837281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.837329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 00:31:36.177 [2024-11-26 18:27:23.837454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.177 [2024-11-26 18:27:23.837480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.177 qpair failed and we were unable to recover it. 
00:31:36.177 [2024-11-26 18:27:23.837567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.177 [2024-11-26 18:27:23.837593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:36.177 qpair failed and we were unable to recover it.
00:31:36.183 [... the same three-line failure (posix.c:1054:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error and "qpair failed and we were unable to recover it") repeats continuously from 18:27:23.837 to 18:27:23.864 for tqpair handles 0x7f5078000b90, 0x7f5074000b90, and 0xfaffa0, all targeting addr=10.0.0.2, port=4420 ...]
00:31:36.183 [2024-11-26 18:27:23.864778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.864804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.864943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.864968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.865079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.865104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.865212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.865237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.865350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.865379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.865471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.865498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.865581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.865606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.865691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.865718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.865809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.865836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.865954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.865986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 
00:31:36.183 [2024-11-26 18:27:23.866072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.866098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.866203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.866229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.866379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.866405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.866487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.866512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.866624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.866649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.866732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.866757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.866842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.866867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.866955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.866980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.867114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.867139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.867220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.867245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 
00:31:36.183 [2024-11-26 18:27:23.867387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.867426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.867521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.867549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.867673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.867698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.867842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.867868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.867960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.867985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.868069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.868094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.868183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.868209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.868320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.868346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.868474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.868512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.868635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.868663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 
00:31:36.183 [2024-11-26 18:27:23.868780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.868806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.868892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.868918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.869002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.869029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.869136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.869162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.869270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.869296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.869419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.869444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.869523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.869553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.869643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.183 [2024-11-26 18:27:23.869667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.183 qpair failed and we were unable to recover it. 00:31:36.183 [2024-11-26 18:27:23.869781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.869806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.869918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.869943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 
00:31:36.184 [2024-11-26 18:27:23.870084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.870110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.870220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.870247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.870361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.870388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.870486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.870514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.870651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.870677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.870788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.870813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.870932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.870958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.871066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.871092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.871178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.871203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.871288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.871323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 
00:31:36.184 [2024-11-26 18:27:23.871448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.871475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.871591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.871618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.871723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.871749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.871838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.871865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.871977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.872003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.872144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.872170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.872333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.872383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.872476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.872504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.872620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.872647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.872762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.872788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 
00:31:36.184 [2024-11-26 18:27:23.872923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.872948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.873033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.873059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.873140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.873166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.873279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.873324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.873416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.873446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.873586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.873611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.873727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.873752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.873861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.873886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.873973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.873999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.874083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.874108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 
00:31:36.184 [2024-11-26 18:27:23.874182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.874207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.874311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.874351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.874476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.874505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.874626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.874654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.874775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.184 [2024-11-26 18:27:23.874802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.184 qpair failed and we were unable to recover it. 00:31:36.184 [2024-11-26 18:27:23.874892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.874918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.875001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.875027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.875121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.875149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.875236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.875261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.875387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.875426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 
00:31:36.185 [2024-11-26 18:27:23.875525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.875553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.875670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.875697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.875837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.875863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.875985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.876013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.876126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.876152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.876288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.876322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.876459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.876485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.876586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.876612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.876751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.876777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.876889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.876915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 
00:31:36.185 [2024-11-26 18:27:23.877067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.877103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.877231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.877269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.877398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.877426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.877560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.877586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.877726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.877752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.877867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.877892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.878007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.878035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.878129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.878155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.878244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.878270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.878372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.878400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 
00:31:36.185 [2024-11-26 18:27:23.878513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.878539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.878618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.878644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.878764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.878790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.878877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.878904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.879047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.879073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.879159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.879185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.879296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.879327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.879467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.879492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.879599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.879625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.879735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.879761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 
00:31:36.185 [2024-11-26 18:27:23.879869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.879895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.880011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.880037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.880124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.880150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.880245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.880285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.880394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.185 [2024-11-26 18:27:23.880421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.185 qpair failed and we were unable to recover it. 00:31:36.185 [2024-11-26 18:27:23.880549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.880576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.880674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.880699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.880820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.880845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.880960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.880985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.881123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.881150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 
00:31:36.186 [2024-11-26 18:27:23.881248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.881287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.881425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.881452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.881540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.881565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.881668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.881694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.881802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.881826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.881919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.881946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.882026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.882054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.882158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.882184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.882266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.882294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.882430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.882455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 
00:31:36.186 [2024-11-26 18:27:23.882538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.882568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.882678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.882703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.882791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.882818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.882925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.882950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.883031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.883056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.883138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.883163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.883246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.883271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.883438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.883467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.883551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.883578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.883667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.883693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 
00:31:36.186 [2024-11-26 18:27:23.883781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.883808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.883951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.883979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.884066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.884091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.884208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.884234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.884401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.884427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.884508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.884533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.884641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.884667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.884750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.884775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.884887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.884912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.884998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.885026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 
00:31:36.186 [2024-11-26 18:27:23.885143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.885170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.885290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.885324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.885420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.885446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.885532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.885559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.885698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.885724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.885838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.885865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.186 [2024-11-26 18:27:23.886011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.186 [2024-11-26 18:27:23.886037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.186 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.886148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.886178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.886260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.886285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.886408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.886434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 
00:31:36.187 [2024-11-26 18:27:23.886513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.886539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.886634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.886660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.886746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.886773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.886890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.886916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.887060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.887086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.887167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.887193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.887270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.887296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.887448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.887473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.887585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.887610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.887747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.887772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 
00:31:36.187 [2024-11-26 18:27:23.887857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.887883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.887966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.887992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.888072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.888097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.888210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.888235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.888349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.888376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.888457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.888482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.888560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.888585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.888689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.888714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.888820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.888845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.888934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.888959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 
00:31:36.187 [2024-11-26 18:27:23.889067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.889092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.889175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.889200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.889299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.889343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.889472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.889500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.889640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.889671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.889752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.889778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.889887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.889912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.889986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.890012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.890119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.890145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.890252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.890278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 
00:31:36.187 [2024-11-26 18:27:23.890393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.890420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.890533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.890561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.890673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.890699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.890783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.890809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.890959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.890993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.891116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.891142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.891225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.891251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.187 [2024-11-26 18:27:23.891348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.187 [2024-11-26 18:27:23.891376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.187 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.891470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.891496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.891636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.891661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 
00:31:36.188 [2024-11-26 18:27:23.891743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.891769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.891900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.891938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.892038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.892064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.892151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.892182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.892263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.892290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.892416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.892443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.892527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.892552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.892665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.892691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.892828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.892852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.892983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.893011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 
00:31:36.188 [2024-11-26 18:27:23.893127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.893154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.893264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.893299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.893438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.893464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.893549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.893574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.893651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.893676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.893768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.893793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.893931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.893957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.894092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.894116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.894227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.894253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.894394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.894421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 
00:31:36.188 [2024-11-26 18:27:23.894509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.894534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.894620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.894645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.894760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.894785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.894891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.894916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.895063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.895093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.895182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.895208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.895296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.895328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.895415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.895440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.895536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.895574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.895692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.895718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 
00:31:36.188 [2024-11-26 18:27:23.895837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.895863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.895945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.895971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.896102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.896142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.896277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.896311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.896431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.896458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.896597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.896623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.896712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.896738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.896823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.896851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.896955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.188 [2024-11-26 18:27:23.896983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.188 qpair failed and we were unable to recover it. 00:31:36.188 [2024-11-26 18:27:23.897122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.897148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 
00:31:36.189 [2024-11-26 18:27:23.897261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.897287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.897438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.897463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.897554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.897579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.897655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.897680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.897771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.897796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.897914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.897939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.898028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.898054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.898140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.898166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.898247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.898272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.898366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.898392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 
00:31:36.189 [2024-11-26 18:27:23.898505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.898531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.898642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.898675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.898790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.898815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.898893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.898919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.899005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.899031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.899167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.899192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.899271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.899297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.899399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.899425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.899514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.899539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.899657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.899682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 
00:31:36.189 [2024-11-26 18:27:23.899794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.899819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.899961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.899987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.900098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.900123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.900203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.900228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.900331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.900370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.900517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.900556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.900650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.900677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.900790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.900817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.900908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.900935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.901043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.901069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 
00:31:36.189 [2024-11-26 18:27:23.901148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.901174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.901287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.901318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.901431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.901457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.901542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.901566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.901678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.901703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.189 qpair failed and we were unable to recover it. 00:31:36.189 [2024-11-26 18:27:23.901812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.189 [2024-11-26 18:27:23.901837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.901946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.901972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.902073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.902113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.902237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.902271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.902366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.902393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 
00:31:36.190 [2024-11-26 18:27:23.902523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.902550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.902629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.902655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.902771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.902798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.902912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.902938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.903051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.903077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.903183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.903209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.903328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.903358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.903455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.903481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.903563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.903589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.903676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.903702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 
00:31:36.190 [2024-11-26 18:27:23.903812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.903839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.903949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.903974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.904071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.904097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.904178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.904204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.904311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.904350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.904442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.904469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.904558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.904584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.904690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.904715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.904833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.904860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.904947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.904973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 
00:31:36.190 [2024-11-26 18:27:23.905090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.905116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.905195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.905219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.905315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.905342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.905446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.905471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.905560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.905586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.905738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.905772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.905930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.905958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.906082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.906108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.906200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.906226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.906363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.906390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 
00:31:36.190 [2024-11-26 18:27:23.906485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.906511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.906608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.906635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.906724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.906749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.906860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.906886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.906970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.906995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.907129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.907155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.190 qpair failed and we were unable to recover it. 00:31:36.190 [2024-11-26 18:27:23.907256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.190 [2024-11-26 18:27:23.907295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.907432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.907459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.907552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.907587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.907694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.907720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 
00:31:36.191 [2024-11-26 18:27:23.907841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.907867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.907985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.908011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.908095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.908121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.908265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.908291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.908393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.908419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.908535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.908561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.908642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.908668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.908777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.908803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.908914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.908939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.909029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.909057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 
00:31:36.191 [2024-11-26 18:27:23.909195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.909221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.909341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.909368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.909502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.909540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.909662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.909689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.909805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.909831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.909969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.909995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.910082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.910108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.910218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.910243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.910374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.910400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.910490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.910516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 
00:31:36.191 [2024-11-26 18:27:23.910645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.910684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.910772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.910800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.910921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.910948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.911060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.911088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.911188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.911214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.911332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.911363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.911479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.911505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.911624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.911653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.911776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.911802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.911889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.911915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 
00:31:36.191 [2024-11-26 18:27:23.912020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.912045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.912128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.912154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.912246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.912273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.912371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.912399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.912489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.912515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.912604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.912631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.912742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.912768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.191 [2024-11-26 18:27:23.912880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.191 [2024-11-26 18:27:23.912906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.191 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.913045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.913071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.913187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.913213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 
00:31:36.192 [2024-11-26 18:27:23.913331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.913358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.913448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.913474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.913593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.913620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.913703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.913729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.913819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.913845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.913918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.913945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.914024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.914050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.914124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.914150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.914317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.914355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.914448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.914474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 
00:31:36.192 [2024-11-26 18:27:23.914561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.914587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.914677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.914702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.914823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.914856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.914970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.915000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.915135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.915162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.915273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.915300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.915394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.915419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.915503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.915530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.915672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.915698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.915838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.915865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 
00:31:36.192 [2024-11-26 18:27:23.915947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.915973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.916087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.916113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.916218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.916258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.916412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.916439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.916520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.916545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.916643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.916676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.916791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.916816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.916933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.916959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.917103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.917129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.917222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.917248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 
00:31:36.192 [2024-11-26 18:27:23.917335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.917362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.917447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.917473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.917558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.917583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.917717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.917743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.917825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.917851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.917967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.917994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.918108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.918134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.918243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.918271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.918365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.192 [2024-11-26 18:27:23.918391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.192 qpair failed and we were unable to recover it. 00:31:36.192 [2024-11-26 18:27:23.918511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.918537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 
00:31:36.193 [2024-11-26 18:27:23.918620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.918645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.918739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.918765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.918852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.918877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.918993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.919020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.919140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.919166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.919309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.919336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.919426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.919452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.919546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.919573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.919686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.919713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.919826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.919854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 
00:31:36.193 [2024-11-26 18:27:23.919962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.919988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.920066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.920092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.920177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.920205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.920321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.920347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.920439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.920465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.920576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.920601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.920683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.920709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.920788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.920813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.920905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.920931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.921039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.921064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 
00:31:36.193 [2024-11-26 18:27:23.921177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.921203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.921289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.921322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.921441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.921468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.921580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.921606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.921692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.921719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.921809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.921840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.921924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.921950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.922071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.922099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.922240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.922266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.922426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.922451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 
00:31:36.193 [2024-11-26 18:27:23.922595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.922621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.922708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.922732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.922814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.922841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.922958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.922983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.923119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.923145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.923228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.923253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.193 [2024-11-26 18:27:23.923334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.193 [2024-11-26 18:27:23.923359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.193 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.923470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.923494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.923608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.923633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.923756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.923781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 
00:31:36.194 [2024-11-26 18:27:23.923892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.923917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.923998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.924023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.924165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.924191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.924296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.924327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.924481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.924506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.924619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.924646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.924742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.924767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.924849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.924875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.924968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.924995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.925109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.925135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 
00:31:36.194 [2024-11-26 18:27:23.925221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.925246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.925371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.925410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.925506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.925539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.925640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.925669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.925797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.925824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.925959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.925984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.926101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.926126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.926237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.926262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.926419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.926458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.926558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.926585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 
00:31:36.194 [2024-11-26 18:27:23.926698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.926725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.926815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.926842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.926985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.927012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.927123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.927149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.927231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.927257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.927378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.927411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.927550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.927577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.927684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.927710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.927794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.927822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.927909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.927936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 
00:31:36.194 [2024-11-26 18:27:23.928030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.928058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.928169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.928194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.928312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.928339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.928426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.928451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.928536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.928561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.928673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.928698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.928841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.194 [2024-11-26 18:27:23.928869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.194 qpair failed and we were unable to recover it. 00:31:36.194 [2024-11-26 18:27:23.928961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.928988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.929101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.929127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.929244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.929270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 
00:31:36.195 [2024-11-26 18:27:23.929390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.929417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.929503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.929528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.929640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.929666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.929746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.929772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.929885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.929912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.930026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.930053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.930183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.930208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.930348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.930374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.930466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.930494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.930578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.930603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 
00:31:36.195 [2024-11-26 18:27:23.930718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.930744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.930856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.930881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.930991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.931022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.931136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.931162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.931277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.931312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.931406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.931432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.931513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.931539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.931650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.931676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.931765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.931790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.931875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.931901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 
00:31:36.195 [2024-11-26 18:27:23.931995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.932023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.932138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.932164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.932297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.932330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.932446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.932471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.932606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.932644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.932731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.932758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.932877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.932905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.932991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.933017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.933098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.933123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.933198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.933224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 
00:31:36.195 [2024-11-26 18:27:23.933312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.933339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.933454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.933479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.933562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.933588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.933696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.933721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.933799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.933824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.933906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.933931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.934042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.934069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.934180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.934207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.195 qpair failed and we were unable to recover it. 00:31:36.195 [2024-11-26 18:27:23.934329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.195 [2024-11-26 18:27:23.934355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.934442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.934468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 
00:31:36.196 [2024-11-26 18:27:23.934550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.934576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.934684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.934710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.934789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.934815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.934899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.934924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.935003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.935029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.935137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.935163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.935273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.935299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.935397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.935422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.935533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.935558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.935640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.935665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 
00:31:36.196 [2024-11-26 18:27:23.935753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.935778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.935888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.935913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.936020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.936050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.936132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.936158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.936246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.936274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.936376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.936403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.936479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.936505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.936619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.936645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.936723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.936750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.936835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.936863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 
00:31:36.196 [2024-11-26 18:27:23.936947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.936974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.937063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.937088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.937169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.937194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.937277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.937309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.937400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.937425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.937511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.937536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.937619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.937644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.937734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.937759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.937838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.937863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.937956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.937982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 
00:31:36.196 [2024-11-26 18:27:23.938096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.938120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.938227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.938253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.938341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.938367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.938481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.938507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.938585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.938609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.938698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.938723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.938862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.938886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.938981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.939007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.939119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.939144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 00:31:36.196 [2024-11-26 18:27:23.939235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.196 [2024-11-26 18:27:23.939261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.196 qpair failed and we were unable to recover it. 
00:31:36.197 [2024-11-26 18:27:23.939359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.939385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.939500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.939525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.939617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.939643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.939731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.939757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.939847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.939872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.939987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.940012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.940096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.940121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.940257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.940282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.940388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.940422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.940531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.940562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 
00:31:36.197 [2024-11-26 18:27:23.940689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.940718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.940857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.940884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.940998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.941028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.941139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.941164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.941282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.941314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.941407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.941432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.941511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.941536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.941625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.941651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.941764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.941789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.941873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.941897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 
00:31:36.197 [2024-11-26 18:27:23.942041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.942066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.942168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.942206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.942289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.942325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.942442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.942468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.942558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.942584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.942692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.942717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.942807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.942833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.942945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.942970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.943054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.943080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.943155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.943180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 
00:31:36.197 [2024-11-26 18:27:23.943317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.943344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.943459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.943484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.943600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.943624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.197 [2024-11-26 18:27:23.943741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.197 [2024-11-26 18:27:23.943769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.197 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.943852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.943877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.944015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.944041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.944153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.944178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.944263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.944289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.944403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.944427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.944515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.944547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 
00:31:36.198 [2024-11-26 18:27:23.944630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.944655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.944764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.944790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.944907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.944933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.945075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.945101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.945187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.945211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.945322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.945348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.945459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.945484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.945595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.945620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.945728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.945753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.945893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.945918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 
00:31:36.198 [2024-11-26 18:27:23.946001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.946026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.946113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.946141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.946225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.946250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.946372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.946399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.946487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.946512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.946649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.946674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.946760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.946785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.946919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.946945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.947069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.947108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.947201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.947229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 
00:31:36.198 [2024-11-26 18:27:23.947342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.947369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.947454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.947479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.947595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.947620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.947707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.947733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.947848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.947876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.947967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.947992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.948096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.948135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.948255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.948281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.948414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.948447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.948577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.948603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 
00:31:36.198 [2024-11-26 18:27:23.948720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.948744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.198 qpair failed and we were unable to recover it. 00:31:36.198 [2024-11-26 18:27:23.948851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.198 [2024-11-26 18:27:23.948876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.948982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.949007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.949140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.949180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.949293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.949324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.949418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.949444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.949554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.949579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.949661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.949686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.949775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.949803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.949923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.949950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 
00:31:36.199 [2024-11-26 18:27:23.950075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.950115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.950262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.950289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.950424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.950451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.950538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.950564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.950648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.950674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.950757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.950783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.950894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.950919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.951028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.951053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.951163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.951188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.951269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.951294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 
00:31:36.199 [2024-11-26 18:27:23.951393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.199 [2024-11-26 18:27:23.951419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:36.199 qpair failed and we were unable to recover it.
00:31:36.199 [2024-11-26 18:27:23.951533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.199 [2024-11-26 18:27:23.951558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:36.199 qpair failed and we were unable to recover it.
00:31:36.199 [2024-11-26 18:27:23.951645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.199 [2024-11-26 18:27:23.951670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:36.199 qpair failed and we were unable to recover it.
00:31:36.199 [2024-11-26 18:27:23.951767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.199 [2024-11-26 18:27:23.951798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:36.199 qpair failed and we were unable to recover it.
00:31:36.199 [2024-11-26 18:27:23.951928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.199 [2024-11-26 18:27:23.951956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:36.199 qpair failed and we were unable to recover it.
00:31:36.199 [2024-11-26 18:27:23.952087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.199 [2024-11-26 18:27:23.952134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:36.199 qpair failed and we were unable to recover it.
00:31:36.199 [2024-11-26 18:27:23.952232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.199 [2024-11-26 18:27:23.952258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:36.199 qpair failed and we were unable to recover it.
00:31:36.199 [2024-11-26 18:27:23.952370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.199 [2024-11-26 18:27:23.952398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:36.199 qpair failed and we were unable to recover it.
00:31:36.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 724196 Killed "${NVMF_APP[@]}" "$@"
00:31:36.199 [2024-11-26 18:27:23.952518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.199 [2024-11-26 18:27:23.952545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:36.199 qpair failed and we were unable to recover it.
00:31:36.199 [2024-11-26 18:27:23.952660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.952685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.952825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.952850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.952967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.952993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.953086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.953114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:31:36.199 [2024-11-26 18:27:23.953195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.953222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.953312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.953339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.953455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:36.199 [2024-11-26 18:27:23.953481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.953570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.953595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 00:31:36.199 [2024-11-26 18:27:23.953681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.199 [2024-11-26 18:27:23.953708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.199 qpair failed and we were unable to recover it. 
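At this point the script enters disconnect_init 10.0.0.2 (host/target_disconnect.sh@48), which restarts the target side and rebuilds a subsystem and listener so the failed qpairs can reconnect. The sketch below is only a rough illustration of what such a helper typically does; the rpc.py method names are standard SPDK RPCs, but the NQN, backing bdev, and sizes are placeholders, not values taken from this run:

  # Illustrative only: the general shape of a disconnect_init-style helper.
  ip=10.0.0.2
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t TCP                         # bring the TCP transport back up
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB malloc bdev (placeholder)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a  # placeholder NQN, allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a "$ip" -s 4420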
00:31:36.199 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:36.199 [2024-11-26 18:27:23.953821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.953846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.953936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.953961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.954042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:36.200 [2024-11-26 18:27:23.954068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.954161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.954188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.954282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.954314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:36.200 [2024-11-26 18:27:23.954433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.954459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.954574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.954600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.954686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.954712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.954860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.954886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 
00:31:36.200 [2024-11-26 18:27:23.954999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.955033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.955150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.955175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.955292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.955333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.955432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.955459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.955553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.955579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.955696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.955721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.955809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.955834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.955914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.955939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.956047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.956073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.956162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.956187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 
00:31:36.200 [2024-11-26 18:27:23.956294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.956328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.956415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.956442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.956552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.956581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.956668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.956695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.956814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.956842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.956957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.956983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.957073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.957099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.957218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.957243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.957341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.957368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.957485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.957509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 
00:31:36.200 [2024-11-26 18:27:23.957587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.957613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.957725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.957750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.957862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.957888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.957973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.957999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.958134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.958173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.958314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.958343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.958429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.200 [2024-11-26 18:27:23.958455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.200 qpair failed and we were unable to recover it. 00:31:36.200 [2024-11-26 18:27:23.958575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.958601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.958715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.958743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 
00:31:36.201 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=724750 00:31:36.201 [2024-11-26 18:27:23.958851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.958878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:36.201 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 724750 00:31:36.201 [2024-11-26 18:27:23.958989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.959015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.959120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.959145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 724750 ']' 00:31:36.201 [2024-11-26 18:27:23.959296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.959349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.201 [2024-11-26 18:27:23.959471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.959499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.201 [2024-11-26 18:27:23.959621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.959649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
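Here nvmfappstart launches a fresh target in the test namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF0), records its pid as nvmfpid=724750, and then blocks in waitforlisten 724750 until the new process answers RPCs on /var/tmp/spdk.sock, which is what the "Waiting for process to start up..." message corresponds to. A minimal sketch of that kind of wait loop, assuming only standard tools (the poll interval, retry count, and rpc.py path are illustrative choices, not values from common.sh):

  # Minimal waitforlisten-style loop: poll until the target pid is alive and
  # its RPC UNIX domain socket answers a trivial RPC.
  pid=724750
  sock=/var/tmp/spdk.sock
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "target exited before listening"; exit 1; }
      if [ -S "$sock" ] && "$rpc" -s "$sock" spdk_get_version >/dev/null 2>&1; then
          echo "target is up and answering RPCs on $sock"
          break
      fi
      sleep 0.1
  done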
00:31:36.201 [2024-11-26 18:27:23.959759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.959786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.201 [2024-11-26 18:27:23.959892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.959927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.960021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.960048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.201 18:27:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.960142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.960168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.960279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.960318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.960457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.960481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.960564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.960589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.960707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.960733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.960844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.960869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 
00:31:36.201 [2024-11-26 18:27:23.960980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.961006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.961122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.961148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.961256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.961282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.961380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.961405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.961489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.961516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.961632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.961657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.961771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.961797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.961911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.961937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.962078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.962105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.962188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.962214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 
00:31:36.201 [2024-11-26 18:27:23.962298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.962332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.962418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.962444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.962541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.962567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.962650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.962676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.962785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.962810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.962900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.201 [2024-11-26 18:27:23.962927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.201 qpair failed and we were unable to recover it. 00:31:36.201 [2024-11-26 18:27:23.963039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.963065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.963155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.963181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.963292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.963330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.963421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.963447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 
00:31:36.202 [2024-11-26 18:27:23.963555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.963581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.963696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.963722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.963816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.963842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.963955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.963980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.964073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.964112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.964235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.964262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.964390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.964418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.964557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.964583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.964700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.964726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.964805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.964831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 
00:31:36.202 [2024-11-26 18:27:23.964946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.964974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.965119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.965144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.965287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.965318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.965408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.965434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.965514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.965540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.965649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.965674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.965813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.965840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.965924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.965950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.966036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.966062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.966147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.966174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 
00:31:36.202 [2024-11-26 18:27:23.966257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.966285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.966439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.966466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.966578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.966604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.966743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.966770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.966853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.966879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.967001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.967027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.967141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.967168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.967315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.967343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.967429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.967456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 00:31:36.202 [2024-11-26 18:27:23.967573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.202 [2024-11-26 18:27:23.967600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.202 qpair failed and we were unable to recover it. 
00:31:36.203 [2024-11-26 18:27:23.967715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.967740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.967849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.967875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.967958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.967983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.968074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.968100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.968194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.968222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.968339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.968366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.968459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.968486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.968571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.968597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.968677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.968708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.968815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.968841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 
00:31:36.203 [2024-11-26 18:27:23.968954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.968981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.969072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.969100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.969215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.969241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.969362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.969390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.969469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.969495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.969634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.969659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.969767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.969792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.969896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.969922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.970006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.970031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.970123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.970150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 
00:31:36.203 [2024-11-26 18:27:23.970274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.970319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.970411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.970438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.970563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.970589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.970676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.970703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.970789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.970817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.970938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.970964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.971079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.971104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.971218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.971244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.971355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.971383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.971497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.971522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 
00:31:36.203 [2024-11-26 18:27:23.971604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.971630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.971750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.971776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.971866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.971891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.972004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.972030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.972119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.972144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.972300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.972354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.972455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.203 [2024-11-26 18:27:23.972483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.203 qpair failed and we were unable to recover it. 00:31:36.203 [2024-11-26 18:27:23.972590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.972616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.972732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.972759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.972841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.972867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 
00:31:36.204 [2024-11-26 18:27:23.972967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.972994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.973113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.973138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.973251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.973276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.973395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.973421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.973501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.973526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.973668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.973694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.973833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.973858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.973941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.973968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.974059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.974085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.974229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.974254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 
00:31:36.204 [2024-11-26 18:27:23.974328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.974355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.974465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.974491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.974602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.974627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.974716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.974742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.974878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.974904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.975018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.975043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.975130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.975156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.975265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.975291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.975437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.975462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.975585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.975610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 
00:31:36.204 [2024-11-26 18:27:23.975720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.975746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.975883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.975908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.976023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.976053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.976159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.976185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.976296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.976342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.976460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.976488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.976581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.976610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.976724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.976750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.976838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.976865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.976978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.977004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 
00:31:36.204 [2024-11-26 18:27:23.977124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.977150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.977244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.977269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.977411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.977437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.977551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.977577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.977692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.204 [2024-11-26 18:27:23.977717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.204 qpair failed and we were unable to recover it. 00:31:36.204 [2024-11-26 18:27:23.977822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.977848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.977951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.977979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.978078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.978117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.978259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.978286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.978386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.978412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 
00:31:36.205 [2024-11-26 18:27:23.978520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.978545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.978653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.978678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.978771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.978797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.978903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.978928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.979041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.979066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.979174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.979199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.979274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.979300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.979386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.979411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.979492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.979518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.979625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.979655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 
00:31:36.205 [2024-11-26 18:27:23.979734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.979759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.979852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.979879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.979994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.980023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.980104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.980130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.980234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.980260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.980377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.980404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.980519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.980545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.980657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.980683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.980801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.980828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.980910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.980935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 
00:31:36.205 [2024-11-26 18:27:23.981044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.981070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.981175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.981202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.981289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.981324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.981414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.981441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.981525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.981552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.981633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.981659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.981740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.981766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.981858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.981886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.981980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.982005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.982131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.982171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 
00:31:36.205 [2024-11-26 18:27:23.982265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.982292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.982397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.982425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.982535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.205 [2024-11-26 18:27:23.982560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.205 qpair failed and we were unable to recover it. 00:31:36.205 [2024-11-26 18:27:23.982702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.982728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.982815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.982840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.982961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.982987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.983098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.983126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.983243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.983269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.983378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.983404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.983524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.983550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 
00:31:36.206 [2024-11-26 18:27:23.983640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.983667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.983757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.983783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.983900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.983926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.984010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.984035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.984123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.984149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.984263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.984291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.984378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.984404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.984520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.984545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.984637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.984664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.984746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.984775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 
00:31:36.206 [2024-11-26 18:27:23.984886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.984911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.984993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.985020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.985106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.985132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.985245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.985271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.985373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.985400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.985485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.985512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.985590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.985616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.985731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.985758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.985867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.985892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.985977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.986004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 
00:31:36.206 [2024-11-26 18:27:23.986104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.986131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.986245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.986272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.986390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.986416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.986526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.986551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.986666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.986692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.986769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.986794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.206 qpair failed and we were unable to recover it. 00:31:36.206 [2024-11-26 18:27:23.986877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.206 [2024-11-26 18:27:23.986902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.986983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.987008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.987146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.987172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.987259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.987288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 
00:31:36.207 [2024-11-26 18:27:23.987395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.987423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.987552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.987596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.987726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.987753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.987867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.987892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.988002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.988027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.988149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.988175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.988261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.988291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.988381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.988407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.988490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.988515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.988613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.988638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 
00:31:36.207 [2024-11-26 18:27:23.988720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.988745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.988858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.988883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.988971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.988998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.989089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.989116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.989210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.989249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.989379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.989406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.989534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.989560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.989670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.989697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.989789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.989815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.989904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.989929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 
00:31:36.207 [2024-11-26 18:27:23.990051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.990078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.990191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.990216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.990298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.990329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.990415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.990441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.990585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.990609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.990719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.990744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.990828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.990856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.990971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.990996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.991118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.991144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.991250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.991276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 
00:31:36.207 [2024-11-26 18:27:23.991399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.991425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.991520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.991559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.991659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.207 [2024-11-26 18:27:23.991686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.207 qpair failed and we were unable to recover it. 00:31:36.207 [2024-11-26 18:27:23.991776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.991809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.991901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.991927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.992015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.992041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.992151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.992176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.992260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.992285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.992381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.992406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.992487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.992513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 
00:31:36.208 [2024-11-26 18:27:23.992604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.992629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.992741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.992765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.992873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.992898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.993038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.993066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.993151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.993176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.993318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.993344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.993430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.993456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.993580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.993608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.993694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.993719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.993811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.993838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 
00:31:36.208 [2024-11-26 18:27:23.993951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.993976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.994061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.994087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.994202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.994227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.994320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.994346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.994453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.994478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.994594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.994622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.994706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.994730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.994851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.994876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.994960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.994985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.995074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.995100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 
00:31:36.208 [2024-11-26 18:27:23.995212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.995242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.995347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.995374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.995514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.995539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.995656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.995681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.995761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.995788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.995900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.995926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.996010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.996035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.996118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.996144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.996281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.996319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.996410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.996435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 
00:31:36.208 [2024-11-26 18:27:23.996543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.996569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.996677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.996703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.996813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.996838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.208 [2024-11-26 18:27:23.996951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.208 [2024-11-26 18:27:23.996978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.208 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.997086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.997125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.997273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.997301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.997438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.997466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.997556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.997584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.997669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.997696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.997820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.997847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 
00:31:36.209 [2024-11-26 18:27:23.997932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.997958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.998043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.998068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.998203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.998229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.998361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.998387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.998523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.998548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.998631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.998656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.998742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.998767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.998851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.998881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.998970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.998996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.999090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.999129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 
00:31:36.209 [2024-11-26 18:27:23.999252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.999279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.999389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.999427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.999556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.999585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.999700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.999726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.999839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.999866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:23.999952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:23.999978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.000115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.000141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.000225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.000251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.000341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.000370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.000460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.000486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 
00:31:36.209 [2024-11-26 18:27:24.000571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.000597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.000708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.000734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.000845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.000872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.000951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.000976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.001084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.001110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.001222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.001247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.001383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.001422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.001514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.001541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.001630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.001656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.001743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.001768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 
00:31:36.209 [2024-11-26 18:27:24.001862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.001891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.002007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.002033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.002115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.002142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.002256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.209 [2024-11-26 18:27:24.002282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.209 qpair failed and we were unable to recover it. 00:31:36.209 [2024-11-26 18:27:24.002405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.002431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.002512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.002539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.002657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.002684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.002770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.002795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.002881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.002909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.003020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.003046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 
00:31:36.210 [2024-11-26 18:27:24.003160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.003187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.003316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.003343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.003458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.003484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.003568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.003594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.003676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.003702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.003809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.003835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.003949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.003975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.004059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.004090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.004210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.004236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.004357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.004384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 
00:31:36.210 [2024-11-26 18:27:24.004463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.004489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.004613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.004639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.004756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.004782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.004871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.004897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.005011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.005036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.005116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.005142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.005225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.005253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.005426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.005465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.005558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.005585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.005664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.005690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 
00:31:36.210 [2024-11-26 18:27:24.005774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.005799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.005913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.005939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.006080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.006106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.006183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.006208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.006317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.006344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.006458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.006484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.006604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.006632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.006717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.006742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.006856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.006885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.006997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.007024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 
00:31:36.210 [2024-11-26 18:27:24.007130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.007156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.007275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.007308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.007426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.007454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.007535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.007561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.210 [2024-11-26 18:27:24.007647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.210 [2024-11-26 18:27:24.007677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.210 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.007787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.007813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.007904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.007929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.008015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.008041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.008180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.008206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.008320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.008346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 
00:31:36.211 [2024-11-26 18:27:24.008428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.008453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.008495] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:31:36.211 [2024-11-26 18:27:24.008568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.008577] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.211 [2024-11-26 18:27:24.008594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.008730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.008754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.008840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.008865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.008952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.008979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.009060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.009088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.009173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.009200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.009298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.009334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.009418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.009444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 
00:31:36.211 [2024-11-26 18:27:24.009560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.009586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.009698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.009724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.009813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.009840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.009919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.009946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.010057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.010083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.010170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.010198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.010286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.010322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.010418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.010444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.010529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.010555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.010648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.010675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 
00:31:36.211 [2024-11-26 18:27:24.010790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.010818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.010981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.011017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.011131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.011172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.011289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.011325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.011420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.011445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.011563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.011589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.011678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.011706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.011795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.011821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.011937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.011964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.012082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.012109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 
00:31:36.211 [2024-11-26 18:27:24.012195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.012221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.012333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.012361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.012503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.012529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.211 [2024-11-26 18:27:24.012643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.211 [2024-11-26 18:27:24.012678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.211 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.012792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.012823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.012941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.012971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.013084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.013110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.013252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.013278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.013437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.013464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.013574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.013613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 
00:31:36.212 [2024-11-26 18:27:24.013740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.013768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.013879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.013905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.013989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.014015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.014102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.014128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.014206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.014232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.014324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.014351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.014430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.014455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.014532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.014558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.014655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.014681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.014788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.014814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 
00:31:36.212 [2024-11-26 18:27:24.014892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.014918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.015047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.015087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.015211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.015238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.015361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.015390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.015484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.015509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.015624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.015650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.015728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.015753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.015844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.015872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.015956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.015982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.016123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.016148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 
00:31:36.212 [2024-11-26 18:27:24.016224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.016250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.016329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.016359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.016447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.016473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.016580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.016606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.016725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.016750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.016834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.016864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.016956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.016984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.017075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.017103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.017244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.017269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.017393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.017419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 
00:31:36.212 [2024-11-26 18:27:24.017529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.017555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.017668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.017693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.017787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.017813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.017922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.017947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.018061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.018086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.018207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.018233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.018346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.018374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.018470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.212 [2024-11-26 18:27:24.018495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.212 qpair failed and we were unable to recover it. 00:31:36.212 [2024-11-26 18:27:24.018635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.018661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.018743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.018768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 
00:31:36.213 [2024-11-26 18:27:24.018875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.018901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.018990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.019015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.019119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.019145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.019259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.019284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.019377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.019403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.019514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.019538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.019658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.019683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.019798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.019823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.019956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.019989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.020120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.020150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 
00:31:36.213 [2024-11-26 18:27:24.020262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.020307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.020411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.020441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.020559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.020586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.020701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.020728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.020871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.020898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.021014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.021041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.021157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.021183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.021300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.021337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.021455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.021481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.021594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.021619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 
00:31:36.213 [2024-11-26 18:27:24.021733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.021759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.021839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.021864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.021959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.021984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.022076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.022103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.022208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.022246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.022396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.022424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.022540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.022567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.022655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.022680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.022763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.022789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.022906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.022933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 
00:31:36.213 [2024-11-26 18:27:24.023077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.023103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.023176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.023202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.023278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.023312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.023425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.023464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.023585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.023613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.023741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.023768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.023886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.023913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.023999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.024025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.024110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.024138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.024246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.024272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 
00:31:36.213 [2024-11-26 18:27:24.024394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.024420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.024503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.024529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.024616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.213 [2024-11-26 18:27:24.024643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.213 qpair failed and we were unable to recover it. 00:31:36.213 [2024-11-26 18:27:24.024756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.024781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.024866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.024893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.024974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.025000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.025137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.025162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.025249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.025276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.025411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.025440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.025565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.025590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 
00:31:36.214 [2024-11-26 18:27:24.025671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.025697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.025782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.025807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.025919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.025945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.026059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.026084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.026165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.026190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.026296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.026329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.026413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.026439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.026524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.026549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.026629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.026655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.026771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.026796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 
00:31:36.214 [2024-11-26 18:27:24.026878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.026903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.026985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.027009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.027141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.027180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.027297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.027332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.027451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.027479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.027595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.027621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.027704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.027730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.027806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.027831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.027907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.027932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.028011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.028037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 
00:31:36.214 [2024-11-26 18:27:24.028148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.028173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.028256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.028284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.028392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.028432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.028531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.028559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.028654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.028680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.028771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.028805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.028920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.028947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.029033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.029060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.029174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.029201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.029322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.029348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 
00:31:36.214 [2024-11-26 18:27:24.029438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.029463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.029552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.029578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.029667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.029693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.029832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.029857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.029943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.029971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.030091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.030118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.030230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.030256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.214 qpair failed and we were unable to recover it. 00:31:36.214 [2024-11-26 18:27:24.030376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.214 [2024-11-26 18:27:24.030404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.030493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.030520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.030610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.030636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 
00:31:36.215 [2024-11-26 18:27:24.030751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.030778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.030860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.030886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.030974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.031000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.031092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.031119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.031260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.031285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.031391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.031430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.031522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.031549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.031658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.031684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.031771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.031797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.031920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.031948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 
00:31:36.215 [2024-11-26 18:27:24.032040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.032066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.032148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.032175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.032259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.032285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.032376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.032401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.032483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.032508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.032600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.032625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.032731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.032756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.032873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.032900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.032987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.033015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.033132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.033158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 
00:31:36.215 [2024-11-26 18:27:24.033254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.033281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.033375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.033402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.033496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.033522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.033607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.033632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.033710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.033736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.033814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.033840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.033958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.033983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.034063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.034091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.034181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.034207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.034324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.034353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 
00:31:36.215 [2024-11-26 18:27:24.034461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.034487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.034573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.034599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.034709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.034736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.034816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.034842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.034958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.034983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.035096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.035122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.035238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.035263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.215 [2024-11-26 18:27:24.035360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.215 [2024-11-26 18:27:24.035388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.215 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.035471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.035496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.035614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.035645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 
00:31:36.216 [2024-11-26 18:27:24.035724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.035749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.035838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.035863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.035978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.036007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.036118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.036144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.036251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.036290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.036425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.036452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.036549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.036576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.036718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.036744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.036844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.036870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.036982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.037008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 
00:31:36.216 [2024-11-26 18:27:24.037094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.037121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.037241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.037267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.037401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.037428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.037521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.037548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.037638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.037664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.037750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.037777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.037859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.037886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.037971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.037998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.038144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.038170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 00:31:36.216 [2024-11-26 18:27:24.038284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.216 [2024-11-26 18:27:24.038317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.216 qpair failed and we were unable to recover it. 
00:31:36.216 [2024-11-26 18:27:24.038417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.216 [2024-11-26 18:27:24.038443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:36.216 qpair failed and we were unable to recover it.
00:31:36.216 (the same pair of errors -- posix.c:1054:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=... with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it." -- repeats continuously from [2024-11-26 18:27:24.038551] through [2024-11-26 18:27:24.065277], console timestamps 00:31:36.216 through 00:31:36.221, cycling over tqpair values 0xfaffa0, 0x7f5074000b90, 0x7f5078000b90 and 0x7f5080000b90, always against addr=10.0.0.2, port=4420)
00:31:36.221 [2024-11-26 18:27:24.065405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.065433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.065549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.065575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.065661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.065686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.065792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.065817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.065906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.065932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.066016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.066042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.066158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.066183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.066270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.066296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.066391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.066417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.066519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.066544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 
00:31:36.221 [2024-11-26 18:27:24.066654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.066680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.066791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.066822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.066909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.066937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.067023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.221 [2024-11-26 18:27:24.067049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.221 qpair failed and we were unable to recover it. 00:31:36.221 [2024-11-26 18:27:24.067133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.067159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.067274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.067300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.067401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.067427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.067513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.067540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.067624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.067652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.067764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.067790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 
00:31:36.222 [2024-11-26 18:27:24.067901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.067927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.068038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.068063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.068169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.068209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.068359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.068388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.068524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.068551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.068668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.068694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.068773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.068799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.068915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.068942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.069063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.069089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.069173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.069201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 
00:31:36.222 [2024-11-26 18:27:24.069341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.069370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.069462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.069490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.069581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.069607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.069692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.069719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.069804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.069831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.069910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.069936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.070044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.070071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.070161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.070188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.070320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.070354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.070464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.070491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 
00:31:36.222 [2024-11-26 18:27:24.070583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.070609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.070699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.070725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.070855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.070880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.070994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.071020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.071108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.071136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.071253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.071279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.071390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.071429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.071549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.222 [2024-11-26 18:27:24.071576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.222 qpair failed and we were unable to recover it. 00:31:36.222 [2024-11-26 18:27:24.071664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.071689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.071837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.071863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 
00:31:36.223 [2024-11-26 18:27:24.071949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.071974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.072059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.072085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.072205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.072233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.072349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.072376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.072471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.072497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.072586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.072613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.072701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.072727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.072836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.072862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.073002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.073028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.073160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.073186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 
00:31:36.223 [2024-11-26 18:27:24.073294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.073327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.073438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.073464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.073549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.073577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.073689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.073715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.073804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.073830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.073928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.073954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.074060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.074086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.074175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.074214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.074298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.074330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.074414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.074440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 
00:31:36.223 [2024-11-26 18:27:24.074521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.074547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.074659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.074685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.074827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.074853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.074965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.074991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.075094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.075133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.075255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.075283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.075383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.075410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.075504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.075532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.075646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.075680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.223 [2024-11-26 18:27:24.075762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.075788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 
00:31:36.223 [2024-11-26 18:27:24.075877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.223 [2024-11-26 18:27:24.075903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.223 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.076014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.076041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.076122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.076148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.076231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.076257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.076380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.076407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.076495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.076521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.076630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.076656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.076770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.076796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.076876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.076902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.076989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.077017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 
00:31:36.224 [2024-11-26 18:27:24.077154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.077180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.077294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.077333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.077471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.077497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.077608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.077633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.077740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.077767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.077877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.077903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.078013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.078039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.078147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.078186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.078273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.078299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.078425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.078451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 
00:31:36.224 [2024-11-26 18:27:24.078574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.078600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.078684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.078710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.078794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.078820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.078896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.078921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.079034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.079063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.079189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.079223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.079331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.079359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.079495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.079521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.079665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.079691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.079803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.079829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 
00:31:36.224 [2024-11-26 18:27:24.079908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.079933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.080046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.080072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.080184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.080210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.080349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.080377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.080489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.080515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.080599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.080625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.080735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.080760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.080870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.080895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.080975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.224 [2024-11-26 18:27:24.081001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.224 qpair failed and we were unable to recover it. 00:31:36.224 [2024-11-26 18:27:24.081122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.081149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 
00:31:36.225 [2024-11-26 18:27:24.081268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.081295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.081392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.081418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.081496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.081522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.081606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.081633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.081721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.081746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.081884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.081910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.082006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.082045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.082134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.082161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.082242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.082269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.082389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.082415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 
00:31:36.225 [2024-11-26 18:27:24.082492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.082517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.082630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.082656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.082743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.082770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.082848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.082874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.082990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.083018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.083096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.083123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.083201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.083228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.083341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.083368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.083482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.083510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.083595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.083621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 
00:31:36.225 [2024-11-26 18:27:24.083701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.083727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.083843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.083870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.083955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.083981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.084094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.084120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.084207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.084233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.084320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.084352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.084463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.084489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.084595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.084621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.084698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.084723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.084837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.084865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 
00:31:36.225 [2024-11-26 18:27:24.084973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.084999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.085083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.085109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.085213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.085239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.085365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.085404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.085531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.085558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.085571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:36.225 [2024-11-26 18:27:24.085641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.085667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.085780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.225 [2024-11-26 18:27:24.085805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.225 qpair failed and we were unable to recover it. 00:31:36.225 [2024-11-26 18:27:24.085888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.085913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.086033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.086066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.086160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.086188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 
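Annotation (not part of the captured log): the repeated errno = 111 above is ECONNREFUSED on Linux, which typically means the TCP SYN to 10.0.0.2 port 4420 was answered with a reset because nothing was listening there at that moment; each nvme_tcp_qpair_connect_sock attempt therefore fails immediately and the qpair is abandoned, as the "qpair failed and we were unable to recover it" lines show. A minimal C check of the errno value, independent of SPDK:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* errno 111 on Linux is ECONNREFUSED ("Connection refused") --
     * the value reported by posix_sock_create in the log above. */
    printf("errno %d (ECONNREFUSED=%d): %s\n", 111, ECONNREFUSED, strerror(111));
    return 0;
}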
00:31:36.226 [2024-11-26 18:27:24.086307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.086334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.086423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.086450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.086533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.086559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.086668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.086694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.086807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.086834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.086916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.086943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.087062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.087088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.087171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.087197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.087284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.087321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.087523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.087550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 
00:31:36.226 [2024-11-26 18:27:24.087662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.087688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.087802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.087828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.087927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.087954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.088042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.088070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.088189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.088215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.088300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.088331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.088418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.088444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.088535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.088562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.088647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.088674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.088768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.088795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 
00:31:36.226 [2024-11-26 18:27:24.088992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.089017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.089099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.089124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.089236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.089262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.089353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.089379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.089470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.089496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.089581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.089612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.089702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.089728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.089825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.089852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.089935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.089962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.090057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.090094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 
00:31:36.226 [2024-11-26 18:27:24.090212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.090239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.090360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.090389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.090481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.090508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.090631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.090657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.090770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.226 [2024-11-26 18:27:24.090797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.226 qpair failed and we were unable to recover it. 00:31:36.226 [2024-11-26 18:27:24.090916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.090942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.091057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.091083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.091169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.091196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.091281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.091322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.091444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.091470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 
00:31:36.227 [2024-11-26 18:27:24.091569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.091596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.091714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.091740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.091827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.091853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.091935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.091961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.092076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.092102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.092213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.092239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.092318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.092364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.092451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.092476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.092586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.092612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.092750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.092775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 
00:31:36.227 [2024-11-26 18:27:24.092887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.092912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.093021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.093047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.093168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.093195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.093307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.093333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.093412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.093438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.093557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.093584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.093673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.093698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.093783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.093808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.093919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.093944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.094054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.094082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 
00:31:36.227 [2024-11-26 18:27:24.094165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.094191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.094284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.094314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.094456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.094482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.094600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.094625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.094714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.094739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.094823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.094849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.094946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.094972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.095061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.095087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.095162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.095188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.095329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.095355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 
00:31:36.227 [2024-11-26 18:27:24.095464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.095490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.095577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.095604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.095749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.227 [2024-11-26 18:27:24.095775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.227 qpair failed and we were unable to recover it. 00:31:36.227 [2024-11-26 18:27:24.095868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.095893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.096051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.096076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.096168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.096193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.096316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.096342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.096427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.096452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.096538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.096563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.096646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.096676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 
00:31:36.228 [2024-11-26 18:27:24.096786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.096811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.096919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.096944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.097065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.097104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.097227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.097255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.097379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.097406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.097493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.097519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.097639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.097665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.097752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.097779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.097897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.097923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.098041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.098066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 
00:31:36.228 [2024-11-26 18:27:24.098178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.098203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.098290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.098321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.098408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.098434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.098576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.098602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.098719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.098747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.098860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.098886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.098997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.099023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.099221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.099249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.099340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.099367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.099485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.099512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 
00:31:36.228 [2024-11-26 18:27:24.099608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.099634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.099746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.099772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.099914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.099941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.100024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.100050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.100142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.100169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.100276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.100313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.100440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.100467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.228 [2024-11-26 18:27:24.100552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.228 [2024-11-26 18:27:24.100577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.228 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.100667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.100692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.100777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.100802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 
00:31:36.229 [2024-11-26 18:27:24.100885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.100910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.101021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.101047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.101183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.101209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.101286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.101318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.101408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.101434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.101524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.101550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.101644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.101669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.101751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.101776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.101890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.101916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.102025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.102050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 
00:31:36.229 [2024-11-26 18:27:24.102142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.102168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.102283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.102318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.102410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.102436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.102518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.102544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.102627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.102653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.102750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.102777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.102865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.102890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.102976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.103002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.103087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.103113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.103224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.103251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 
00:31:36.229 [2024-11-26 18:27:24.103336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.103362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.103477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.103502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.103590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.103616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.103730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.103761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.103876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.103902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.103983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.104008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.104107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.104135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.104220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.104247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.104335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.104362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.104445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.104472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 
00:31:36.229 [2024-11-26 18:27:24.104605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.104632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.104743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.104769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.104857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.229 [2024-11-26 18:27:24.104883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.229 qpair failed and we were unable to recover it. 00:31:36.229 [2024-11-26 18:27:24.104995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.105021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.105104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.105130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.105221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.105247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.105368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.105395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.105511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.105537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.105621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.105646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.105727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.105753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 
00:31:36.230 [2024-11-26 18:27:24.105862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.105888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.105967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.105992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.106080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.106106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.106245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.106271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.106392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.106418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.106502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.106528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.106643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.106668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.106783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.106808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.106919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.106944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.107024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.107050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 
00:31:36.230 [2024-11-26 18:27:24.107181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.107232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.107359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.107388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.107478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.107504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.107592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.107619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.107734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.107760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.107843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.107869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.108010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.108036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.108128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.108153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.108236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.108261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.108390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.108416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 
00:31:36.230 [2024-11-26 18:27:24.108502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.108527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.108661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.108687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.108807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.108832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.108945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.108970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.109057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.109083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.109220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.109245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.109364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.109390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.109479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.109504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.109587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.109612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.109695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.109721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 
00:31:36.230 [2024-11-26 18:27:24.109831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.109856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.230 [2024-11-26 18:27:24.109941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.230 [2024-11-26 18:27:24.109969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.230 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.110065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.110092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.110237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.110263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.110383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.110410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.110490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.110517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.110650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.110676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.110765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.110795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.110910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.110936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.111050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.111075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 
00:31:36.231 [2024-11-26 18:27:24.111158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.111183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.111323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.111349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.111470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.111495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.111576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.111602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.111690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.111716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.111825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.111850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.111930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.111956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.112042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.112067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.112146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.112171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.112253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.112282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 
00:31:36.231 [2024-11-26 18:27:24.112371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.112397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.112516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.112543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.112657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.112683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.112814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.112840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.112931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.112956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.113040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.113067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.113149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.113175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.113260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.113285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.113430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.113456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.113543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.113568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 
00:31:36.231 [2024-11-26 18:27:24.113681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.113706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.113791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.113819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.113935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.113961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.114070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.114096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.114189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.114221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.114332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.114359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.114478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.114504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.231 [2024-11-26 18:27:24.114607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.231 [2024-11-26 18:27:24.114634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.231 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.114771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.114797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.114877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.114903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 
00:31:36.232 [2024-11-26 18:27:24.115018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.115044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.115125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.115151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.115240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.115265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.115350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.115375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.115486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.115512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.115628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.115653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.115763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.115791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.115873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.115899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.115987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.116014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.116105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.116132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 
00:31:36.232 [2024-11-26 18:27:24.116263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.116289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.116395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.116421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.116506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.116533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.116655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.116681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.116798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.116824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.116908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.116934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.117020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.117045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.117138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.117163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.117253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.117279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.117418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.117460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 
00:31:36.232 [2024-11-26 18:27:24.117584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.117612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.117705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.117733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.117824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.117850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.117962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.117987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.118071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.118096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.118188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.118215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.118310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.118336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.118440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.118466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.118572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.118597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.118676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.118701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 
00:31:36.232 [2024-11-26 18:27:24.118816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.118841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.232 [2024-11-26 18:27:24.118961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.232 [2024-11-26 18:27:24.118989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.232 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.119105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.119132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.119218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.119245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.119360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.119388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.119525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.119552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.119635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.119661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.119775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.119802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.119888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.119915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.120046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.120072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 
00:31:36.233 [2024-11-26 18:27:24.120192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.120218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.120300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.120351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.120469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.120495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.120585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.120610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.120720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.120745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.120858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.120884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.121000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.121027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.121169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.121195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.121332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.121371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.121466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.121493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 
00:31:36.233 [2024-11-26 18:27:24.121583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.121609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.121695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.121721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.121834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.121860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.121944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.121969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.122054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.122080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.122167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.122192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.122275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.122299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.122420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.122445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.122532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.122561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.122678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.122704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 
00:31:36.233 [2024-11-26 18:27:24.122790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.122816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.122936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.122962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.123058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.123084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.123171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.123198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.123281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.123310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.123394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.123419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.123538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.123563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.123675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.123701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.233 [2024-11-26 18:27:24.123840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.233 [2024-11-26 18:27:24.123865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.233 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.123946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.123971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 
00:31:36.234 [2024-11-26 18:27:24.124078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.124104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.124210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.124236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.124322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.124348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.124428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.124453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.124566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.124591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.124710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.124749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.124871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.124899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.125018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.125045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.125156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.125182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.125315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.125355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 
00:31:36.234 [2024-11-26 18:27:24.125452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.125481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.125593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.125620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.125734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.125761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.125853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.125881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.125970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.125996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.126086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.126112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.126217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.126242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.126351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.126377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.126459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.126484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.126598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.126623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 
00:31:36.234 [2024-11-26 18:27:24.126765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.126793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.126885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.126913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.127024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.127050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.127127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.127153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.127259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.127285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.127378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.127405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.127483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.127510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.127644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.127670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.127783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.127809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.127891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.127916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 
00:31:36.234 [2024-11-26 18:27:24.128022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.128048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.128133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.128158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.128245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.128272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.128363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.128390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.128479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.128504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.128589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.128616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.128733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.234 [2024-11-26 18:27:24.128759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.234 qpair failed and we were unable to recover it. 00:31:36.234 [2024-11-26 18:27:24.128839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.128865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.128954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.128981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.129063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.129089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 
00:31:36.235 [2024-11-26 18:27:24.129201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.129226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.129343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.129370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.129480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.129506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.129589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.129614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.129699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.129726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.129815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.129846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.129934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.129960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.130069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.130094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.130185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.130214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.130313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.130340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 
00:31:36.235 [2024-11-26 18:27:24.130430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.130456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.130545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.130572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.130657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.130684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.130794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.130820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.130907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.130933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.131071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.131097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.131210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.131236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.131323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.131351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.131459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.131485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.131577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.131603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 
00:31:36.235 [2024-11-26 18:27:24.131690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.131715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.131800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.131825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.131961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.131987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.132085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.132124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.132220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.132247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.132340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.132368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.132452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.132480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.132575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.132601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.132714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.132740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.132851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.132879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 
00:31:36.235 [2024-11-26 18:27:24.132960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.132985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.133072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.133098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.133232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.133263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.133358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.133384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.133495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.235 [2024-11-26 18:27:24.133521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.235 qpair failed and we were unable to recover it. 00:31:36.235 [2024-11-26 18:27:24.133611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.133638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.133721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.133747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.133824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.133849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.133957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.133985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.134066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.134093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 
00:31:36.236 [2024-11-26 18:27:24.134216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.134256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.134383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.134410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.134524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.134550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.134638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.134664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.134753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.134779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.134891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.134918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.135057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.135097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.135215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.135243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.135362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.135389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.135470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.135496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 
00:31:36.236 [2024-11-26 18:27:24.135583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.135609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.135744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.135770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.135880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.135906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.135988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.136013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.136102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.136128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.136211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.136239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.136354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.136380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.136468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.136494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.136606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.136631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.136739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.136765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 
00:31:36.236 [2024-11-26 18:27:24.136876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.136901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.136987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.137015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.137117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.137157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.137310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.137339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.137427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.137454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.137540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.137567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.137651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.137679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.137767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.137793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.137901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.137926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.138037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.138063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 
00:31:36.236 [2024-11-26 18:27:24.138173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.138199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.138321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.138347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.138451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.236 [2024-11-26 18:27:24.138476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.236 qpair failed and we were unable to recover it. 00:31:36.236 [2024-11-26 18:27:24.138584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.138609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.138688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.138714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.138821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.138846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.138960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.138988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.139118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.139146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.139288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.139327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.139416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.139444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 
00:31:36.237 [2024-11-26 18:27:24.139568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.139595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.139683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.139710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.139824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.139850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.139968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.139997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.140085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.140112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.140194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.140220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.140314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.140342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.140434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.140461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.140566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.140592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.140676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.140702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 
00:31:36.237 [2024-11-26 18:27:24.140811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.140839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.140948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.140974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.141087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.141114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.141224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.141249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.141338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.141366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.141485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.141512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.141633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.141659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.141743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.141770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.141860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.141887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.142002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.142035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 
00:31:36.237 [2024-11-26 18:27:24.142139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.142178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.142263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.142290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.142385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.142411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.142517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.142542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.237 [2024-11-26 18:27:24.142626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.237 [2024-11-26 18:27:24.142652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.237 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.142786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.142812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.142895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.142921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.143058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.143083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.143186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.143212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.143324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.143350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 
00:31:36.238 [2024-11-26 18:27:24.143442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.143470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.143566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.143592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.143700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.143727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.143848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.143874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.143989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.144014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.144127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.144153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.144239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.144269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.144367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.144393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.144484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.144510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.144590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.144616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 
00:31:36.238 [2024-11-26 18:27:24.144733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.144759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.144866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.144892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.144980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.145006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.145109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.145134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.145213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.145239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.145338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.145366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.145456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.145487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.145571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.145597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.145681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.145706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.145852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.145889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 
00:31:36.238 [2024-11-26 18:27:24.145968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.145993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.146074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.146099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.146188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.146213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.146324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.146349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.146429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.146454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.146532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.146558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.146676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.146705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.146793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.146821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.146936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.146963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.147052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.147078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 
00:31:36.238 [2024-11-26 18:27:24.147174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.147202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.147296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.147335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.147435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.238 [2024-11-26 18:27:24.147461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.238 qpair failed and we were unable to recover it. 00:31:36.238 [2024-11-26 18:27:24.147548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.147574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.147702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.147728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.147815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.147840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.147931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.147956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.148038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.148064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.148192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.148218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.148290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.148324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 
00:31:36.239 [2024-11-26 18:27:24.148402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.148427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.148512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.148537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.148602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.239 [2024-11-26 18:27:24.148633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.148635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.239 [2024-11-26 18:27:24.148654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.239 [2024-11-26 18:27:24.148659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 [2024-11-26 18:27:24.148674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.148686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.239 [2024-11-26 18:27:24.148743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.239 [2024-11-26 18:27:24.148768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.239 qpair failed and we were unable to recover it. 00:31:36.239 [2024-11-26 18:27:24.148852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.148876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.148959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.148989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.149071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.149113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.149212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.149239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it.
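The app_setup_trace notices above describe how a trace of the running nvmf target could be captured while these connect() retries are happening. A minimal sketch of that workflow, taken from the notices themselves and not verified against this particular run (the instance id 0, shm name nvmf, and /dev/shm/nvmf_trace.0 path are simply what the log reports):

  # capture a snapshot of the enabled tracepoints from the live nvmf application
  spdk_trace -s nvmf -i 0
  # or preserve the shared-memory trace file for offline analysis/debug
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0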
00:31:36.523 [2024-11-26 18:27:24.149337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.149364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.149489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.149515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.149618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.149659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.149767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.149794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.149883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.149908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.150027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.150056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.150149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.150176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.150271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.150313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.150337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:36.523 [2024-11-26 18:27:24.150406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.150434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 
00:31:36.523 [2024-11-26 18:27:24.150387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:36.523 [2024-11-26 18:27:24.150436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:36.523 [2024-11-26 18:27:24.150439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:36.523 [2024-11-26 18:27:24.150530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.150557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.150650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.150675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.150836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.150863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.150974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.151000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.151114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.151141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.151231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.151257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.151363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.151392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.151510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.151538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.151639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.151664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 
00:31:36.523 [2024-11-26 18:27:24.151763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.151788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.151887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.151924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.152030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.152060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.152168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.152195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.152281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.152322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.152444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.152470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.152553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.152579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.152662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.152687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.523 [2024-11-26 18:27:24.152778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.523 [2024-11-26 18:27:24.152805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.523 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.152937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.152963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 
00:31:36.524 [2024-11-26 18:27:24.153088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.153117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.153212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.153238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.153347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.153374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.153486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.153513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.153606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.153635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.153731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.153758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.153845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.153873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.153957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.153983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.154069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.154096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.154178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.154204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 
00:31:36.524 [2024-11-26 18:27:24.154287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.154326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.154406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.154431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.154552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.154603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.154696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.154722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.154812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.154839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.154935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.154963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.155047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.155075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.155170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.155197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.155292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.155328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.155417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.155443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 
00:31:36.524 [2024-11-26 18:27:24.155553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.155583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.155702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.155728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.155820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.155846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.155928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.155955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.156046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.156073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.156172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.156204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.156323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.156351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.156442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.156468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.156579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.156605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.156689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.156715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 
00:31:36.524 [2024-11-26 18:27:24.156799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.156825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.156911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.156942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.157039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.157080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.157170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.157197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.157282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.157321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.157419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.157445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.524 [2024-11-26 18:27:24.157534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.524 [2024-11-26 18:27:24.157560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.524 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.157645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.157671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.157758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.157784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.157870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.157895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 
00:31:36.525 [2024-11-26 18:27:24.158008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.158034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.158115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.158141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.158225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.158251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.158344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.158370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.158456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.158482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.158573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.158599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.158681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.158706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.158799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.158827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.158919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.158946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.159028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.159054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 
00:31:36.525 [2024-11-26 18:27:24.159137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.159164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.159256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.159281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.159394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.159423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.159511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.159537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.159625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.159651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.159727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.159752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.159840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.159865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.159964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.159990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.160104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.160137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.160235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.160263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 
00:31:36.525 [2024-11-26 18:27:24.160360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.160388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.160473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.160499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.160586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.160613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.160728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.160754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.160848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.160874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.160963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.160991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.161083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.161110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.161206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.161233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.161328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.161355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.161439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.161466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 
00:31:36.525 [2024-11-26 18:27:24.161559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.161597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.161693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.161720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.161827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.161853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.525 [2024-11-26 18:27:24.161937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.525 [2024-11-26 18:27:24.161964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.525 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.162042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.162067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.162154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.162180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.162269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.162295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.162392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.162418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.162502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.162528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.162612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.162638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 
00:31:36.526 [2024-11-26 18:27:24.162730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.162755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.162842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.162867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.162954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.162980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.163063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.163088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.163175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.163200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.163294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.163329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.163425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.163464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.163555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.163582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.163669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.163695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.163810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.163836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 
00:31:36.526 [2024-11-26 18:27:24.163964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.163990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.164070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.164097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.164182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.164208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.164289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.164325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.164414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.164441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.164530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.164556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.164633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.164659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.164746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.164773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.164852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.164877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.164991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.165017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 
00:31:36.526 [2024-11-26 18:27:24.165097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.165122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.165197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.165223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.165317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.165343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.165441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.526 [2024-11-26 18:27:24.165468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.526 qpair failed and we were unable to recover it. 00:31:36.526 [2024-11-26 18:27:24.165558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.165584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.165676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.165702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.165808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.165834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.165917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.165944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.166028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.166053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.166136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.166161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 
00:31:36.527 [2024-11-26 18:27:24.166250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.166275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.166371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.166397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.166511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.166538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.166616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.166642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.166728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.166754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.166834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.166860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.166947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.166973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.167059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.167086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.167168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.167194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.167274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.167300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 
00:31:36.527 [2024-11-26 18:27:24.167389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.167415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.167501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.167526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.167611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.167637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.167730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.167756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.167833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.167858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.167933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.167958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.168044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.168070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.168176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.168201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.168286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.168325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.168412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.168439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 
00:31:36.527 [2024-11-26 18:27:24.168528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.168555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.168672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.168698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.168779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.168806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.168883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.168909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.169023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.169050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.169130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.169156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.169235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.527 [2024-11-26 18:27:24.169260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.527 qpair failed and we were unable to recover it. 00:31:36.527 [2024-11-26 18:27:24.169356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.169382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.169467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.169493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.169583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.169612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 
00:31:36.528 [2024-11-26 18:27:24.169707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.169733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.169816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.169842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.169930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.169956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.170073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.170099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.170177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.170203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.170286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.170333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.170419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.170445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.170551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.170577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.170690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.170718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.170814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.170839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 
00:31:36.528 [2024-11-26 18:27:24.170921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.170946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.171029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.171055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.171140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.171165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.171285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.171316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.171396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.171423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.171509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.171535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.171622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.171648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.171735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.171762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.171843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.171869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.171948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.171974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 
00:31:36.528 [2024-11-26 18:27:24.172058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.172085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.172170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.172196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.172279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.172312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.172394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.172420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.172498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.172524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.172632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.172658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.528 [2024-11-26 18:27:24.172751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.528 [2024-11-26 18:27:24.172778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.528 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.172859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.172885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.172973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.172999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.173084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.173110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 
00:31:36.529 [2024-11-26 18:27:24.173186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.173213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.173321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.173347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.173434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.173459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.173562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.173602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.173696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.173724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.173843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.173871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.173955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.173980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.174066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.174092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.174201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.174227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.174312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.174343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 
00:31:36.529 [2024-11-26 18:27:24.174438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.174464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.174548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.174573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.174658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.174686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.174782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.174807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.174901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.174930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.175015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.175043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.175132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.175160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.175244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.175271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.175371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.175398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.175484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.175509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 
00:31:36.529 [2024-11-26 18:27:24.175588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.175613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.175696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.175721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.175801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.175826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.175917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.175944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.176031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.176059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.176148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.176176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.176269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.176296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.176398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.176425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.176505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.176531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.176613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.176640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 
00:31:36.529 [2024-11-26 18:27:24.176722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.176749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.176833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.176859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.176948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.176975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.177091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.177116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.529 qpair failed and we were unable to recover it. 00:31:36.529 [2024-11-26 18:27:24.177201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.529 [2024-11-26 18:27:24.177228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.177346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.177372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.177450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.177480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.177568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.177594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.177683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.177710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.177789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.177815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 
00:31:36.530 [2024-11-26 18:27:24.177907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.177934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.178041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.178067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.178198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.178238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.178332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.178360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.178475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.178502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.178596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.178622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.178705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.178730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.178817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.178842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.178927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.178955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.179047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.179073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 
00:31:36.530 [2024-11-26 18:27:24.179178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.179207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.179300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.179336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.179417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.179442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.179524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.179550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.179640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.179667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.179780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.179806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.179889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.179915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.180001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.180027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.180160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.180185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.180272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.180300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 
00:31:36.530 [2024-11-26 18:27:24.180395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.180423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.180512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.180538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.180626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.180653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.180764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.180795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.530 [2024-11-26 18:27:24.180914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.530 [2024-11-26 18:27:24.180940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.530 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.181023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.181050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.181138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.181166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.181250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.181276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.181380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.181408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.181521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.181546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 
00:31:36.531 [2024-11-26 18:27:24.181633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.181659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.181806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.181832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.181911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.181937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.182020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.182045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.182136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.182163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.182253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.182282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.182378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.182404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.182522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.182548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.182633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.182661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.182772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.182798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 
00:31:36.531 [2024-11-26 18:27:24.182883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.182909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.182998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.183026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.183142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.183168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.183246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.183274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.183364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.183390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.183477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.183503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.183589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.183615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.183703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.183728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.183841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.183868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.183945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.183971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 
00:31:36.531 [2024-11-26 18:27:24.184055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.184085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.184167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.184193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.184269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.184294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.184389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.184415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.184498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.184524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.184615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.184643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.184731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.184759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.184854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.184882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.184964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.184990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.185070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.185096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 
00:31:36.531 [2024-11-26 18:27:24.185174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.185200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.531 [2024-11-26 18:27:24.185286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.531 [2024-11-26 18:27:24.185321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.531 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.185410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.185436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.185543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.185569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.185657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.185682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.185760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.185786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.185859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.185885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.185974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.186002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.186092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.186120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.186201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.186229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 
00:31:36.532 [2024-11-26 18:27:24.186323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.186350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.186437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.186463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.186547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.186573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.186650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.186676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.186764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.186792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.186873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.186899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.186985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.187012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.187109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.187136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.187219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.187246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.187336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.187364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 
00:31:36.532 [2024-11-26 18:27:24.187502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.187528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.187609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.187636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.187721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.187748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.187837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.187863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.187975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.188000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.188082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.188108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.188190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.188216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.188294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.188333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.188449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.188475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.188556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.188582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 
00:31:36.532 [2024-11-26 18:27:24.188662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.188688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.188786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.188814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.188903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.188931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.189016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.532 [2024-11-26 18:27:24.189043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.532 qpair failed and we were unable to recover it. 00:31:36.532 [2024-11-26 18:27:24.189134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.189161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.189252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.189277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.189371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.189398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.189487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.189513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.189630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.189655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.189741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.189766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 
00:31:36.533 [2024-11-26 18:27:24.189855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.189883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.190000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.190028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.190118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.190145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.190232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.190259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.190363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.190390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.190471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.190497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.190591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.190617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.190700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.190726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.190808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.190836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.190920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.190947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 
00:31:36.533 [2024-11-26 18:27:24.191059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.191100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.191196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.191224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.191341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.191368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.191450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.191476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.191550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.191577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.191665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.191692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.191784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.191810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.191892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.191923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.192009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.192037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.192134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.192174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 
00:31:36.533 [2024-11-26 18:27:24.192292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.192334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.192417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.192443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.192529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.192555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.533 [2024-11-26 18:27:24.192637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.533 [2024-11-26 18:27:24.192662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.533 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.192753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.192778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.192861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.192888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.192979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.193006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.193092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.193119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.193205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.193230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.193341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.193367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 
00:31:36.534 [2024-11-26 18:27:24.193450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.193476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.193573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.193600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.193682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.193708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.193816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.193841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.193925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.193951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.194028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.194054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.194164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.194192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.194276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.194311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.194404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.194431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.194536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.194561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 
00:31:36.534 [2024-11-26 18:27:24.194641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.194666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.194746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.194772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.194859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.194885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.194972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.194999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.195081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.195111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.195202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.195228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.195346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.195375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.195458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.195484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.195574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.195600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.195677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.195703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 
00:31:36.534 [2024-11-26 18:27:24.195789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.195816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.195897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.195924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.196013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.196040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.196125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.196151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.196233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.196259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.196348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.196374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.196459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.196485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.196571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.196597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.196706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.196731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.196817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.196843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 
00:31:36.534 [2024-11-26 18:27:24.196931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.534 [2024-11-26 18:27:24.196959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.534 qpair failed and we were unable to recover it. 00:31:36.534 [2024-11-26 18:27:24.197043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.197071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.197159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.197184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.197298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.197332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.197414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.197440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.197523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.197548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.197628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.197655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.197748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.197776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.197862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.197888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.197977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.198003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 
00:31:36.535 [2024-11-26 18:27:24.198081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.198106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.198189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.198219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.198329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.198355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.198439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.198464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.198543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.198569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.198704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.198730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.198808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.198834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.198915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.198940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.199028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.199056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.199154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.199183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 
00:31:36.535 [2024-11-26 18:27:24.199266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.199292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.199384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.199410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.199529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.199556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.199645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.199671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.199782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.199807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.199929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.199955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.200041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.200068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.200150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.200177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.200261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.200289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.200390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.200416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 
00:31:36.535 [2024-11-26 18:27:24.200504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.200530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.200614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.200640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.200733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.200760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.200847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.535 [2024-11-26 18:27:24.200875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.535 qpair failed and we were unable to recover it. 00:31:36.535 [2024-11-26 18:27:24.200964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.200990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.201073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.201100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.201182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.201207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.201294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.201326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.201424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.201462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.201571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.201598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 
00:31:36.536 [2024-11-26 18:27:24.201686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.201713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.201793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.201819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.201902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.201928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.202012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.202038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.202127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.202166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.202258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.202286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.202386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.202414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.202497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.202523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.202604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.202629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.202714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.202739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 
00:31:36.536 [2024-11-26 18:27:24.202822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.202848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.202937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.202968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.203077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.203102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.203186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.203212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.203294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.203331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.203447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.203473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.203555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.203581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.203692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.203717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.203801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.203826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.203912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.203941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 
00:31:36.536 [2024-11-26 18:27:24.204019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.204045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.204127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.204153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.204229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.204255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.204376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.204403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.204489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.204515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.204606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.204632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.204743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.204768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.204849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.204874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.204982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.205008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.205083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.205109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 
00:31:36.536 [2024-11-26 18:27:24.205191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.205216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.536 [2024-11-26 18:27:24.205312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.536 [2024-11-26 18:27:24.205340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.536 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.205429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.205456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.205548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.205574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.205651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.205677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.205757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.205782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.205869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.205895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.205977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.206003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.206092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.206125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.206355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.206394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 
00:31:36.537 [2024-11-26 18:27:24.206498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.206526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.206612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.206637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.206718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.206743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.206828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.206853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.206964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.206989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.207071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.207096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.207177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.207202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.207322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.207349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.207435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.207461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.207540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.207566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 
00:31:36.537 [2024-11-26 18:27:24.207655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.207681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.207762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.207792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.207871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.207897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.208011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.208036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.208126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.208153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.208241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.208267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.208350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.208376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.208487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.208512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.208605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.208630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.208711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.208737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 
00:31:36.537 [2024-11-26 18:27:24.208815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.208841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.208920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.208946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.209046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.209086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.209207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.209234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.209325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.209352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.537 [2024-11-26 18:27:24.209450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.537 [2024-11-26 18:27:24.209477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.537 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.209594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.209620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.209704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.209730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.209852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.209878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.209966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.209992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 
00:31:36.538 [2024-11-26 18:27:24.210079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.210105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.210214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.210239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.210362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.210388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.210474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.210500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.210585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.210610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.210720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.210745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.210829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.210854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.210940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.210965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.211075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.211105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.211197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.211237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 
00:31:36.538 [2024-11-26 18:27:24.211337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.211365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.211450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.211476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.211565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.211591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.211673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.211698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.211785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.211810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.211897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.211923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.212033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.212058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.212165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.212190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.212266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.212295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.212433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.212459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 
00:31:36.538 [2024-11-26 18:27:24.212574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.212601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.212711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.212737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.212834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.212860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.212945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.212971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.213084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.213110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.213211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.213250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.213378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.213407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.213491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.213518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.213598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.213624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.538 qpair failed and we were unable to recover it. 00:31:36.538 [2024-11-26 18:27:24.213707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.538 [2024-11-26 18:27:24.213733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 
00:31:36.539 [2024-11-26 18:27:24.213825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.213851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.213931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.213957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.214065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.214091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.214168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.214195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.214284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.214317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.214411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.214438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.214531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.214557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.214669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.214695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.214809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.214834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.215026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.215053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 
00:31:36.539 [2024-11-26 18:27:24.215139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.215164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.215273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.215299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.215388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.215414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.215499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.215525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.215661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.215686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.215770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.215798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.215872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.215897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.215975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.216001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.216089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.216114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.216208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.216234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 
00:31:36.539 [2024-11-26 18:27:24.216324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.216351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.216458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.216483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.216609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.216648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.216735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.216763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.216848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.216875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.216964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.216990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.217073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.217098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.217178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.217203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.217279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.217312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.217419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.217444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 
00:31:36.539 [2024-11-26 18:27:24.217634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.217660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.217740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.217765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.217915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.539 [2024-11-26 18:27:24.217951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.539 qpair failed and we were unable to recover it. 00:31:36.539 [2024-11-26 18:27:24.218056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.218084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.218183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.218209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.218295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.218333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.218418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.218444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.218526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.218552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.218642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.218668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.218753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.218779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 
00:31:36.540 [2024-11-26 18:27:24.218867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.218892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.218966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.218992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.219076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.219103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.219190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.219215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.219297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.219328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.219437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.219462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.219557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.219582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.219667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.219692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.219780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.219806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.219892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.219917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 
00:31:36.540 [2024-11-26 18:27:24.220033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.220058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.220133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.220158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.220238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.220263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.220461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.220488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.220569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.220594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.220683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.220708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.220795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.220821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.220906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.220931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.221018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.221043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.221151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.221190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 
00:31:36.540 [2024-11-26 18:27:24.221278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.221319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.221441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.221468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.221555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.221582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.221675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.221701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.221787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.221815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.221908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.221934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.222050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.222075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.222161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.222186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.222282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.222314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.222409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.222434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 
00:31:36.540 [2024-11-26 18:27:24.222548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.222573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.540 qpair failed and we were unable to recover it. 00:31:36.540 [2024-11-26 18:27:24.222687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.540 [2024-11-26 18:27:24.222712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.222796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.222821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.222914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.222939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.223031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.223059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.223178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.223204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.223283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.223318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.223428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.223454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.223535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.223561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.223639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.223665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 
00:31:36.541 [2024-11-26 18:27:24.223755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.223781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.223869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.223896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.224029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.224068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.224156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.224183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.224269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.224295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.224389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.224415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.224507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.224535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.224621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.224647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.224727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.224754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.224837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.224863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 
00:31:36.541 [2024-11-26 18:27:24.224939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.224965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.225054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.225080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.225194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.225220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.225298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.225335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.225420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.225446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.225525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.225551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.225631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.225657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.225745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.225773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.225867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.225893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.225999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.226031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 
00:31:36.541 [2024-11-26 18:27:24.226125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.226164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.226280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.226318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.226442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.226468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.226551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.226576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.226658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.226684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.541 qpair failed and we were unable to recover it. 00:31:36.541 [2024-11-26 18:27:24.226791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.541 [2024-11-26 18:27:24.226816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.226898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.226924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.227021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.227050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.227131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.227157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.227250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.227276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 
00:31:36.542 [2024-11-26 18:27:24.227366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.227393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.227473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.227499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.227578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.227604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.227698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.227726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.227814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.227840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.227925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.227952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.228037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.228063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.228159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.228185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.228278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.228309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.228395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.228421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 
00:31:36.542 [2024-11-26 18:27:24.228501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.228526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.228634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.228660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.228747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.228773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.228860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.228885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.228972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.228999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.229091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.229119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.229195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.229226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.229311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.229338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.229420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.229446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.229542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.229568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 
00:31:36.542 [2024-11-26 18:27:24.229662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.229689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.229797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.229822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.229902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.229927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.230007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.230033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.230121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.230146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.230225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.230251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.230339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.230366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.230458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.230484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.542 qpair failed and we were unable to recover it. 00:31:36.542 [2024-11-26 18:27:24.230576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.542 [2024-11-26 18:27:24.230602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.230712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.230738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 
00:31:36.543 [2024-11-26 18:27:24.230840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.230868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.230958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.230984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.231071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.231098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.231205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.231231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.231322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.231349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.231438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.231465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.231549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.231575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.231654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.231680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.231763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.231791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.231873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.231899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 
00:31:36.543 [2024-11-26 18:27:24.231990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.232015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.232098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.232123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.232212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.232239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.232335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.232367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.232448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.232474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.232608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.232634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.232746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.232772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.232900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.232926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.233034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.233059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.233148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.233174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 
00:31:36.543 [2024-11-26 18:27:24.233257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.233285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.233388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.233415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.233501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.233527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.233613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.233638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.233718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.233744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.233821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.233846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.233928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.233953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.234065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.234090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.234175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.234201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.234280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.234313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 
00:31:36.543 [2024-11-26 18:27:24.234405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.234434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.234515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.234542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.234619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.234645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.234734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.234761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.234848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.234874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.234955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.234981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.543 qpair failed and we were unable to recover it. 00:31:36.543 [2024-11-26 18:27:24.235076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.543 [2024-11-26 18:27:24.235101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.235180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.235206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.235290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.235323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.235417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.235444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 
00:31:36.544 [2024-11-26 18:27:24.235542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.235579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.235680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.235706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.235821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.235847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.235923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.235949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.236034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.236061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.236139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.236165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.236251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.236277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.236377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.236404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.236490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.236516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.236604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.236630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 
00:31:36.544 [2024-11-26 18:27:24.236712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.236739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.236827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.236855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.236952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.236980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.237069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.237099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.237213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.237238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.237324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.237351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.237434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.237459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.237536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.237561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.237672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.237697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.237777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.237802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 
00:31:36.544 [2024-11-26 18:27:24.237928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.237953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.238036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.238062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.238145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.238170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.238274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.238300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.238402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.238441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.238528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.238555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.238690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.238716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.238801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.238826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.238907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.238933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.239009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.239034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 
00:31:36.544 [2024-11-26 18:27:24.239145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.239172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.239257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.239283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.239369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.239395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.239491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.239516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.544 [2024-11-26 18:27:24.239601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.544 [2024-11-26 18:27:24.239626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.544 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.239715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.239741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.239826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.239852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.239943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.239968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.240054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.240079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.240164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.240191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 
00:31:36.545 [2024-11-26 18:27:24.240275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.240311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.240398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.240424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.240549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.240574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.240654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.240680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.240760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.240785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.240866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.240893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.240980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.241006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.241081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.241107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.241193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.241218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.241293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.241328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 
00:31:36.545 [2024-11-26 18:27:24.241406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.241431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.241514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.241541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.241634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.241661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.241784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.241810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.241923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.241949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.242022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.242048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.242143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.242182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.242272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.242298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.242391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.242416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.242492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.242518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 
00:31:36.545 [2024-11-26 18:27:24.242603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.242629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.242715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.242742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.242836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.242863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.242951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.242980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.243064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.243092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.243203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.243230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.243326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.243353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.243440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.243474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.243566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.243591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.243675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.243701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 
00:31:36.545 [2024-11-26 18:27:24.243785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.243811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.243930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.545 [2024-11-26 18:27:24.243957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.545 qpair failed and we were unable to recover it. 00:31:36.545 [2024-11-26 18:27:24.244045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.244071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.244155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.244180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.244258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.244284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.244383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.244410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.244497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.244523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.244634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.244660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.244745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.244772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.244850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.244875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 
00:31:36.546 [2024-11-26 18:27:24.244957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.244983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.245077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.245104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.245185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.245211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.245294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.245329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.245425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.245451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.245536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.245562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.245641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.245668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.245747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.245773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.245862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.245887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.245998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.246024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 
00:31:36.546 [2024-11-26 18:27:24.246150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.246189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.246313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.246341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.246422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.246448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.246531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.246556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.246673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.246699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.246784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.246811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.246896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.246921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.247004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.247030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.247117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.247144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.247236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.247264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 
00:31:36.546 [2024-11-26 18:27:24.247366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.247394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.247504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.247530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.247637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.247663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.247743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.247768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.247855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.247883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.247990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.248015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.248094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.248120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.248209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.248242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.248333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.248361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.546 [2024-11-26 18:27:24.248441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.248467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 
00:31:36.546 [2024-11-26 18:27:24.248544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.546 [2024-11-26 18:27:24.248569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.546 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.248650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.248676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.248769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.248795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.248901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.248927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.249017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.249043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.249117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.249142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.249221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.249249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.249326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.249351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.249438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.249465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.249557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.249583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 
00:31:36.547 [2024-11-26 18:27:24.249665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.249691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.249780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.249827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.249921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.249948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.250054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.250092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.250210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.250236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.250369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.250397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.250481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.250507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.250597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.250624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.250731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.250757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.250835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.250860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 
00:31:36.547 [2024-11-26 18:27:24.250944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.250969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.251063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.251089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.251167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.251192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.251274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.251300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.251398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.251429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.251524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.251552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.251641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.251667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.251760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.251789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.251880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.251906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.251996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.252023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 
00:31:36.547 [2024-11-26 18:27:24.252113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.252141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.252235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.252261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.252353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.252379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.252464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.252490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.547 [2024-11-26 18:27:24.252578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.547 [2024-11-26 18:27:24.252605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.547 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.252695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.252721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.252805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.252832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.252925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.252952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.253038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.253064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.253148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.253176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 
00:31:36.548 [2024-11-26 18:27:24.253257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.253283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.253377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.253404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.253491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.253517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.253602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.253629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.253718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.253744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.253825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.253850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.253936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.253965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.254050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.254077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.254196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.254224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.254313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.254339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 
00:31:36.548 [2024-11-26 18:27:24.254425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.254451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.254545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.254573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.254663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.254689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.254777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.254804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.254887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.254913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.254995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.255021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.255101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.255128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.255211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.255237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.255322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.255350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.255430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.255456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 
00:31:36.548 [2024-11-26 18:27:24.255539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.255565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.255644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.255669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.255748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.255773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.255881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.255906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.256049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.256094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.256188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.256215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.256297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.256329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.256413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.256439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.256530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.256555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.256638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.256666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 
00:31:36.548 [2024-11-26 18:27:24.256762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.256789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.256876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.256902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.256996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.548 [2024-11-26 18:27:24.257023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.548 qpair failed and we were unable to recover it. 00:31:36.548 [2024-11-26 18:27:24.257110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.257136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.257218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.257244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.257330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.257358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.257443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.257468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.257549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.257574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.257657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.257682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.257766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.257794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 
00:31:36.549 [2024-11-26 18:27:24.257881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.257907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.258002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.258028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.258137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.258164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.258244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.258271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.258362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.258389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.258486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.258513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.258595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.258621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.258706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.258732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.258819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.258864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.258949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.258975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 
00:31:36.549 [2024-11-26 18:27:24.259065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.259093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.259177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.259209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.259289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.259321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.259436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.259462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.259547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.259573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.259655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.259681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.259763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.259788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.259874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.259899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.259976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.260002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.260077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.260103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 
00:31:36.549 [2024-11-26 18:27:24.260199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.260228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.260314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.260342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.260435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.260461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.260546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.260571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.260652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.260677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.260771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.260810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.260890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.260917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.260999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.261025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.261107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.261133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.261206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.261231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 
00:31:36.549 [2024-11-26 18:27:24.261318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.549 [2024-11-26 18:27:24.261345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.549 qpair failed and we were unable to recover it. 00:31:36.549 [2024-11-26 18:27:24.261424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.261449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.261535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.261561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.261674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.261700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.261779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.261804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.261890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.261919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.262001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.262027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.262114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.262141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.262222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.262254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.262345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.262372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 
00:31:36.550 [2024-11-26 18:27:24.262451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.262477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.262564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.262590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.262669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.262694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.262780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.262805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.262893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.262920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.263007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.263033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.263115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.263140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.263220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.263246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.263326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.263353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.263436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.263461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 
00:31:36.550 [2024-11-26 18:27:24.263543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.263568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.263651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.263677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.263760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.263785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.263865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.263890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.263998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.264024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.264115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.264155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.264243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.264271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.264360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.264387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.264469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.264495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.264602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.264628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 
00:31:36.550 [2024-11-26 18:27:24.264705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.264731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.264810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.264835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.264920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.264946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.265057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.265082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.265165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.550 [2024-11-26 18:27:24.265192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.550 qpair failed and we were unable to recover it. 00:31:36.550 [2024-11-26 18:27:24.265280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.265320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.265406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.265433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.265516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.265543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.265626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.265653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.265739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.265766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 
00:31:36.551 [2024-11-26 18:27:24.265847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.265875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.265955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.265981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.266056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.266082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.266188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.266214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.266292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.266335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.266428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.266453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.266586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.266611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.266696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.266721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.266819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.266845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.266929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.266956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 
00:31:36.551 [2024-11-26 18:27:24.267042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.267068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.267153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.267178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.267261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.267287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.267381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.267406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.267491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.267516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.267595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.267620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.267730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.267756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.267837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.267863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.267952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.267978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.268088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.268113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 
00:31:36.551 [2024-11-26 18:27:24.268191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.268216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.268309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.268338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.268433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.268461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.268549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.268576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.268658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.268685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.268771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.268797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.268887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.268914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.268997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.269023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.269103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.269130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.269213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.269238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 
00:31:36.551 [2024-11-26 18:27:24.269326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.269353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.269437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.269463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.551 [2024-11-26 18:27:24.269572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.551 [2024-11-26 18:27:24.269597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.551 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.269683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.269709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.269793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.269820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.269904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.269930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.270020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.270045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.270125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.270151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.270234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.270260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.270358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.270386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 
00:31:36.552 [2024-11-26 18:27:24.270471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.270497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.270574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.270600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.270712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.270738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.270824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.270851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.270930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.270956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.271034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.271060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.271144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.271170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.271253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.271278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.271362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.271389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.271483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.271509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 
00:31:36.552 [2024-11-26 18:27:24.271592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.271617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.271697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.271722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.271801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.271828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.271911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.271937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.272024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.272052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.272130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.272157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.272244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.272271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.272357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.272384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.272468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.272493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.272578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.272603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 
00:31:36.552 [2024-11-26 18:27:24.272677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.272702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.272792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.272818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.552 [2024-11-26 18:27:24.272924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.552 [2024-11-26 18:27:24.272955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.552 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.273036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.273062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.273150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.273177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.273260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.273286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.273379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.273404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.273485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.273511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.273590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.273616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.273702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.273728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 
00:31:36.553 [2024-11-26 18:27:24.273811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.273839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.273926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.273953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.274045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.274085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.274172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.274199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.274282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.274315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.274427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.274453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.274541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.274567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.274651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.274676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.274791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.274818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.274909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.274938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 
00:31:36.553 [2024-11-26 18:27:24.275027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.275053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.275137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.275164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.275243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.275269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.275365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.275392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.275477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.275502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.275592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.275619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.275702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.275727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.275808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.275835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.275923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.275950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.276034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.276065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 
00:31:36.553 [2024-11-26 18:27:24.276158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.276184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.276263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.276289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.276377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.276402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.276491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.276518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.276602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.276629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.276706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.276731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.276813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.276838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.276922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.276948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.277055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.277080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.277165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.277192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 
00:31:36.553 [2024-11-26 18:27:24.277274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.553 [2024-11-26 18:27:24.277301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.553 qpair failed and we were unable to recover it. 00:31:36.553 [2024-11-26 18:27:24.277624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.277649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.277728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.277754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.277843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.277869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.277951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.277977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.278057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.278082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.278175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.278202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.278293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.278325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.278410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.278436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.278517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.278543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 
00:31:36.554 [2024-11-26 18:27:24.278630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.278656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.278766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.278793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.278871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.278897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.278982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.279007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.279084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.279109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.279188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.279214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.279319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.279356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.279470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.279510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.554 [2024-11-26 18:27:24.279604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.279633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 
00:31:36.554 [2024-11-26 18:27:24.279711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.279737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.279822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:31:36.554 [2024-11-26 18:27:24.279849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.279942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.279971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.280053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.280081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.280161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.280189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:36.554 [2024-11-26 18:27:24.280275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.280307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.280388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.280413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.280497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.280524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:36.554 [2024-11-26 18:27:24.280605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.280630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 
00:31:36.554 [2024-11-26 18:27:24.280721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.280747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:36.554 [2024-11-26 18:27:24.280834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.554 [2024-11-26 18:27:24.280861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.554 qpair failed and we were unable to recover it. 00:31:36.554 [2024-11-26 18:27:24.280946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.280971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.281057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.281083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.281196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.281223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.281318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.281347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.281425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.281451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.281531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.281556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.281637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.281662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 
00:31:36.555 [2024-11-26 18:27:24.281743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.281768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.281853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.281880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.281958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.281984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.282057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.282082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.282166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.282192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.282280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.282317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.282405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.282432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.282509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.282536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.282622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.282647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.282739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.282765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 
00:31:36.555 [2024-11-26 18:27:24.282846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.282873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.282957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.282983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.283070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.283097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.283175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.283201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.283285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.283322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.283409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.283437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.283547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.283573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.283658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.283689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.283770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.283795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.283875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.283901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 
00:31:36.555 [2024-11-26 18:27:24.283979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.284007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.284093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.284119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.284199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.284225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.284322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.284350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.284433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.284461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.284542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.284568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.284659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.284686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.284780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.284819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.284953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.555 [2024-11-26 18:27:24.284980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.555 qpair failed and we were unable to recover it. 00:31:36.555 [2024-11-26 18:27:24.285065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.285093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 
00:31:36.556 [2024-11-26 18:27:24.285169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.285195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.285311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.285338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.285416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.285443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.285527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.285554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.285634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.285660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.285749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.285776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.285890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.285916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.285993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.286018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.286102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.286127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.286209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.286235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 
00:31:36.556 [2024-11-26 18:27:24.286319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.286346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.286423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.286448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.286528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.286554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.286646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.286672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.286778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.286808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.286935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.286961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.287040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.287065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.287149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.287174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.287248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.287274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.287388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.287414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 
00:31:36.556 [2024-11-26 18:27:24.287525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.287550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.287631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.287656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.287738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.287764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.287854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.287880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.287955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.287981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.288062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.288090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.288178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.288204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.288286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.288326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.288448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.288474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 00:31:36.556 [2024-11-26 18:27:24.288550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.556 [2024-11-26 18:27:24.288576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.556 qpair failed and we were unable to recover it. 
00:31:36.556 [2024-11-26 18:27:24.288658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.288683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.288795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.288821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.288899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.288925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.289016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.289055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.289150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.289179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.289289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.289338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.289433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.289459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.289543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.289569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.289651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.289678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.289751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.289778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 
00:31:36.557 [2024-11-26 18:27:24.289903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.289928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.290023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.290056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.290141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.290167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.290246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.290272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.290362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.290389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.290465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.290491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.290576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.290603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.290688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.290713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.290788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.290814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.290893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.290919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 
00:31:36.557 [2024-11-26 18:27:24.291000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.291027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.291123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.291162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.291252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.291280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.291386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.291415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.291529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.291557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.291657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.291683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.291763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.291790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.291883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.291909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.291990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.292016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.292123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.292150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 
00:31:36.557 [2024-11-26 18:27:24.292234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.292270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.292378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.292404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.292486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.292511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.292586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.557 [2024-11-26 18:27:24.292612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.557 qpair failed and we were unable to recover it. 00:31:36.557 [2024-11-26 18:27:24.292709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.292736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.292817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.292843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.292927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.292953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.293032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.293057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.293139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.293166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.293246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.293273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 
00:31:36.558 [2024-11-26 18:27:24.293371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.293398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.293479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.293506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.293597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.293624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.293713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.293741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.293820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.293846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.293936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.293962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.294040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.294066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.294147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.294175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.294251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.294279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.294377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.294412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 
00:31:36.558 [2024-11-26 18:27:24.294520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.294547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.294634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.294660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.294754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.294782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.294867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.294894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.295001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.295027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.295118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.295157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.295249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.295277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.295369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.295398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.295510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.295536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.295619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.295644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 
00:31:36.558 [2024-11-26 18:27:24.295725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.295750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.295862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.295887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.295995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.296020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.296111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.296136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.296210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.296235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.296324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.296351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.558 qpair failed and we were unable to recover it. 00:31:36.558 [2024-11-26 18:27:24.296444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.558 [2024-11-26 18:27:24.296472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.296557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.296582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.296665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.296691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.296767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.296793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 
00:31:36.559 [2024-11-26 18:27:24.296881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.296907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.296989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.297014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.297099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.297125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.297208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.297233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.297322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.297348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.297428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.297454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.297535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.297561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.297640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.297665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.297756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.297787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.297867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.297892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 
00:31:36.559 [2024-11-26 18:27:24.297982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.298007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.298117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.298143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.298224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.298253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.298341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.298368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.298458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.298485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.298567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.298594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.298713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.298739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.298825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.298852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.298938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.298966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.299046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.299072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 
00:31:36.559 [2024-11-26 18:27:24.299194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.299234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.299359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.299386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.299472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.299498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.299581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.299607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.299684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.299710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.299785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.299811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.299916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.299941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.300023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.300048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.300133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.300160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.300240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.300267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 
00:31:36.559 [2024-11-26 18:27:24.300392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.300421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.300507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.300533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.300610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.300637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.300725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.559 [2024-11-26 18:27:24.300752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.559 qpair failed and we were unable to recover it. 00:31:36.559 [2024-11-26 18:27:24.300831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.300857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.300941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.300967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.301048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.301073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.301157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.301182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.301262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.301288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.301387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.301412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 
00:31:36.560 [2024-11-26 18:27:24.301516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.301542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.301658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.301686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.301770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.301796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.301882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.301908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.301990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.302016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.302098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.302124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.302234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.302261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.302361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.302388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.302472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.302498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.302590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.302616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 
00:31:36.560 [2024-11-26 18:27:24.302743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.302770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.302853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.302880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.303000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.303027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.303139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.303166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.303249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.303275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.303381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.303408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.303492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.303519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.303603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.303630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.303710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.303737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.303817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.303842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 
00:31:36.560 [2024-11-26 18:27:24.303919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.303945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.304026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.304051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.304195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.304220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.304315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.304343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.304439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.304465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.304545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.304571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.304655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.304681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.304796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.304822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.304906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.304933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.305028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.305067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 
00:31:36.560 [2024-11-26 18:27:24.305184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.305211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.305297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.305332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.560 qpair failed and we were unable to recover it. 00:31:36.560 [2024-11-26 18:27:24.305419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.560 [2024-11-26 18:27:24.305447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.305538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.305564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.305644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.305669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.305749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.305781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.305870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.305899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:36.561 [2024-11-26 18:27:24.305983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.306011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.306114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.306140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 
00:31:36.561 [2024-11-26 18:27:24.306221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.306247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.306336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.306362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:36.561 [2024-11-26 18:27:24.306444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.306469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.306550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.561 [2024-11-26 18:27:24.306576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.306680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.306705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.306791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.306819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:36.561 [2024-11-26 18:27:24.306926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.306953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.307065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.307092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 
00:31:36.561 [2024-11-26 18:27:24.307178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.307204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.307289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.307320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.307398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.307424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.307507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.307534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.307642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.307668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.307775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.307801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.307889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.307917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.307998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.308024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.308126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.308165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.308249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.308275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 
00:31:36.561 [2024-11-26 18:27:24.308368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.308395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.308505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.308531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.308620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.308646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.308748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.308782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.308912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.308939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.309052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.309078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.309164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.309191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.309280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.561 [2024-11-26 18:27:24.309313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.561 qpair failed and we were unable to recover it. 00:31:36.561 [2024-11-26 18:27:24.309412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.562 [2024-11-26 18:27:24.309438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.562 qpair failed and we were unable to recover it. 00:31:36.562 [2024-11-26 18:27:24.309525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.562 [2024-11-26 18:27:24.309551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.562 qpair failed and we were unable to recover it. 
00:31:36.562 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously from 18:27:24.309651 through 18:27:24.333927 for tqpair=0xfaffa0, 0x7f5074000b90, 0x7f5078000b90 and 0x7f5080000b90, all with addr=10.0.0.2, port=4420 ...]
00:31:36.566 [2024-11-26 18:27:24.334016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.334041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.334131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.334158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.334240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.334266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.334369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.334397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.334493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.334518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.334601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.334626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.334742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.334769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.334852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.334879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.334997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.335024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.335103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.335129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 
00:31:36.566 [2024-11-26 18:27:24.335208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.335233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.335322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.335348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.335454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.566 [2024-11-26 18:27:24.335480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.566 qpair failed and we were unable to recover it. 00:31:36.566 [2024-11-26 18:27:24.335569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.335596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.335678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.335704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.335786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.335812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.335890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.335916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.336030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.336070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.336186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.336213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.336297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.336331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 
00:31:36.567 [2024-11-26 18:27:24.336426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.336451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.336532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.336564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.336645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.336670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.336755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.336781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.336893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.336918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.337004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.337030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.337108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.337133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.337211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.337236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.337325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.337359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.337452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.337479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 
00:31:36.567 [2024-11-26 18:27:24.337570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.337595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.337687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.337714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.337794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.337820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.337909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.337934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.338027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.338053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.338140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.338166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.338254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.338279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.338385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.338412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.338496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.338521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.338605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.338631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 
00:31:36.567 [2024-11-26 18:27:24.338745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.338770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.338856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.338882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.338965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.338990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.339076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.339101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.339183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.339208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.339289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.339323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.339413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.339440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.339556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.339582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.339665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.339696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.339786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.339811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 
00:31:36.567 [2024-11-26 18:27:24.339893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.339919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.340026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.340052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.340136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.340162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.340254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.340281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.340385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.340418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.340526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.340551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.340634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.340659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.340747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.567 [2024-11-26 18:27:24.340773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.567 qpair failed and we were unable to recover it. 00:31:36.567 [2024-11-26 18:27:24.340877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.340902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.341011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.341036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 
00:31:36.568 [2024-11-26 18:27:24.341117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.341143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.341255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.341280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.341380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.341406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.341484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.341510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.341595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.341622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.341695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.341720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.341797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.341822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.341901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.341927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.342020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.342060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.342151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.342180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 
00:31:36.568 [2024-11-26 18:27:24.342293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.342325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.342418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.342444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.342533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.342559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.342651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.342677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.342793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.342820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.342897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.342927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.343013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.343039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.343122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.343148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.343226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.343251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.343328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.343355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 
00:31:36.568 [2024-11-26 18:27:24.343449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.343475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.343560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.343586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.343690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.343715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.343825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.343851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.343936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.343962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.344047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.344072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.344158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.344186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.344267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.344293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.344394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.344421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.344523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.344548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 
00:31:36.568 [2024-11-26 18:27:24.344656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.344682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.344766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.344792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.344878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.344904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.345000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.345038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.345138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.345167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.345253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.345279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.345402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.345428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.345509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.345534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.345614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.345639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.345750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.345776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 
00:31:36.568 [2024-11-26 18:27:24.345885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.345910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.345996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.346023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.346117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.346148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.346230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.346256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.568 qpair failed and we were unable to recover it. 00:31:36.568 [2024-11-26 18:27:24.346358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.568 [2024-11-26 18:27:24.346384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.346476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.346502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.346586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.346611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.346700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.346726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.346808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.346834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.346911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.346936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 
00:31:36.569 [2024-11-26 18:27:24.347016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.347043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.347146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.347186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.347284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.347317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.347410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.347436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.347532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.347559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.347639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.347665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.347781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.347807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.347893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.347919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.348010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.348037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.348158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.348184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 
00:31:36.569 [2024-11-26 18:27:24.348275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.348307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.348396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.348422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.348503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.348528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.348613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.348638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.348730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.348754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.348833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.348858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.348941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.348969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.349080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.349106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.349200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.349228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.349316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.349358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 
00:31:36.569 [2024-11-26 18:27:24.349455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.349482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.349567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.349594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.349677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.349703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.349786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.349811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.349894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.349919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.350006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.350032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.350123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.350149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.350234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.350261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.350353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.350381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.350472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.350497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 
00:31:36.569 [2024-11-26 18:27:24.350581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.350606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.350700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.350725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.350810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.350836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.350953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.350979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.351064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.351089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.351168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.351193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.351294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.351327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.569 qpair failed and we were unable to recover it. 00:31:36.569 [2024-11-26 18:27:24.351415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.569 [2024-11-26 18:27:24.351442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.351523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.351550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 Malloc0 00:31:36.570 [2024-11-26 18:27:24.351637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.351662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 
00:31:36.570 [2024-11-26 18:27:24.351770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.351796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.351884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.351910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.352005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.352032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.352114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.352140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.352233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:36.570 [2024-11-26 18:27:24.352261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.352360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.352387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.570 [2024-11-26 18:27:24.352497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.352522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:36.570 [2024-11-26 18:27:24.352608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.352633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 
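Interleaved with the connection errors, the test script (host/target_disconnect.sh) begins configuring the target over JSON-RPC; the first step creates the TCP transport. A minimal sketch of the equivalent standalone invocation, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on the default RPC socket (the -o flag is reproduced exactly as recorded in the log above):

# create the TCP transport on the running nvmf_tgt (flags as recorded above)
./scripts/rpc.py nvmf_create_transport -t tcp -o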
00:31:36.570 [2024-11-26 18:27:24.352716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.352741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.352825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.352850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.352931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.352956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.353069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.353095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.353184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.353211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.353319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.353357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.353451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.353477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.353561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.353587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.353672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.353698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.353787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.353813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 
00:31:36.570 [2024-11-26 18:27:24.353907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.353934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.354033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.354073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.354165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.354192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.354270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.354296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.354394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.354420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.354508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.354533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.354610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.354634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.354749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.354776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.354860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.570 [2024-11-26 18:27:24.354886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.570 qpair failed and we were unable to recover it. 00:31:36.570 [2024-11-26 18:27:24.354970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.354996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 
00:31:36.571 [2024-11-26 18:27:24.355089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.355115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.355199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.355225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.355301] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.571 [2024-11-26 18:27:24.355324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.355353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.355449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.355475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.355566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.355593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.355684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.355710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.355796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.355824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.355937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.355963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.356050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.356077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.356167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.356194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 
00:31:36.571 [2024-11-26 18:27:24.356277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.356308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.356395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.356422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.356508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.356534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.356617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.356642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.356730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.356755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.356840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.356867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.356977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.357014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.357141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.357168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.357252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.357277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.357375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.357401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 
00:31:36.571 [2024-11-26 18:27:24.357492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.357519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.357630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.357657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.357738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.357764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.357851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.357877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.357959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.357984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.358071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.358097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.358202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.358227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.358319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.358348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.358439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.358465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.358554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.358579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 
00:31:36.571 [2024-11-26 18:27:24.358666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.358691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.358799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.358824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.358914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.358939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.359021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.359049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.359181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.359220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.359317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.359345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.359436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.571 [2024-11-26 18:27:24.359462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.571 qpair failed and we were unable to recover it. 00:31:36.571 [2024-11-26 18:27:24.359547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.359572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.359652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.359678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.359760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.359787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 
00:31:36.572 [2024-11-26 18:27:24.359878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.359906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.359995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.360021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.360139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.360165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.360244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.360271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.360363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.360389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.360474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.360500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.360583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.360607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.360715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.360740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.360825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.360852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.360960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.360986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 
00:31:36.572 [2024-11-26 18:27:24.361066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.361092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.361177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.361202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.361285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.361321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.361412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.361438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.361529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.361556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.361637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.361663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.361745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.361775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.361855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.361880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.361967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.361995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.362085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.362111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 
00:31:36.572 [2024-11-26 18:27:24.362237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.362264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.362355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.362381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.362493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.362518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.362605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.362630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.362740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.362765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.362847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.362872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.362956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.362981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.363065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.363092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.363175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.363202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.363319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.363347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 
00:31:36.572 [2024-11-26 18:27:24.363447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.363474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.572 [2024-11-26 18:27:24.363567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.363594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 [2024-11-26 18:27:24.363677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.363703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:36.572 [2024-11-26 18:27:24.363815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.572 [2024-11-26 18:27:24.363842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.572 qpair failed and we were unable to recover it. 00:31:36.572 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.572 [2024-11-26 18:27:24.363929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.363955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:36.573 [2024-11-26 18:27:24.364033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.364059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.364172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.364199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.364288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.364322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 
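The next configuration step visible in the trace creates the NVMe-oF subsystem that the host will later try to connect to. A minimal sketch of the same call made directly against the target, assuming scripts/rpc.py and the default RPC socket (-a allows any host, -s sets the serial number, as recorded above):

# create subsystem cnode1, allow any host, set its serial number
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001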
00:31:36.573 [2024-11-26 18:27:24.364438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.364464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.364548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.364574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.364661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.364686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.364770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.364802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.364890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.364915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.364996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.365022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.365114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.365141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.365232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.365257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.365347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.365373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.365458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.365483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 
00:31:36.573 [2024-11-26 18:27:24.365590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.365615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.365691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.365715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.365798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.365823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.365911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.365936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.366024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.366051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.366145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.366172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.366263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.366288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.366399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.366438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.366535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.366563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.366654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.366680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 
00:31:36.573 [2024-11-26 18:27:24.366789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.366815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.366931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.366957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.367037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.367063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.367146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.367173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.367268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.367296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.367425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.367451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.367539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.367564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.367681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.367706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.367791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.367816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.367905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.367933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 
00:31:36.573 [2024-11-26 18:27:24.368051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.368086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.368205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.368231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.368318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.368345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.368456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.573 [2024-11-26 18:27:24.368482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.573 qpair failed and we were unable to recover it. 00:31:36.573 [2024-11-26 18:27:24.368592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.368618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.368698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.368724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.368818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.368844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.368927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.368953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.369039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.369066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.369182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.369208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 
00:31:36.574 [2024-11-26 18:27:24.369290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.369319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.369403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.369429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.369506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.369532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.369643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.369668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.369758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.369783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.369901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.369926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.370007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.370032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.370112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.370137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.370217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.370242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.370350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.370376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 
00:31:36.574 [2024-11-26 18:27:24.370456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.370481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.370563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.370589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.370669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.370694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.370835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.370863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.370989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.371015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.371098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.371124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.371212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.371238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.371348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.371393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 00:31:36.574 [2024-11-26 18:27:24.371486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.574 [2024-11-26 18:27:24.371514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.574 qpair failed and we were unable to recover it. 
00:31:36.574 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.574 [2024-11-26 18:27:24.371621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.574 [2024-11-26 18:27:24.371647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:36.574 qpair failed and we were unable to recover it.
00:31:36.574 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:36.574 [2024-11-26 18:27:24.371724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.574 [2024-11-26 18:27:24.371750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:36.574 qpair failed and we were unable to recover it.
00:31:36.574 [2024-11-26 18:27:24.371838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.574 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.574 [2024-11-26 18:27:24.371864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:36.574 qpair failed and we were unable to recover it.
00:31:36.574 [2024-11-26 18:27:24.371944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.574 [2024-11-26 18:27:24.371969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:36.574 qpair failed and we were unable to recover it.
00:31:36.574 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.574 [2024-11-26 18:27:24.372056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.574 [2024-11-26 18:27:24.372083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:36.574 qpair failed and we were unable to recover it.
00:31:36.574 [2024-11-26 18:27:24.372165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.574 [2024-11-26 18:27:24.372191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:36.574 qpair failed and we were unable to recover it.
00:31:36.574 [2024-11-26 18:27:24.372310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.575 [2024-11-26 18:27:24.372337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:36.575 qpair failed and we were unable to recover it.
00:31:36.575 [2024-11-26 18:27:24.372431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.575 [2024-11-26 18:27:24.372456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420
00:31:36.575 qpair failed and we were unable to recover it.
00:31:36.575 [2024-11-26 18:27:24.372543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.372570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.372680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.372705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.372821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.372849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.372931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.372958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.373060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.373098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.373224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.373253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.373344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.373371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.373454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.373479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.373589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.373615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.373694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.373719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 
00:31:36.575 [2024-11-26 18:27:24.373802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.373829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.373922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.373948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.374039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.374066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.374151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.374179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.374286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.374321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.374402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.374433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.374520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.374545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.374623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.374649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.374723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.374748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.374827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.374852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 
00:31:36.575 [2024-11-26 18:27:24.374941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.374969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.375083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.375109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.375202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.375229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.375319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.375346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.375430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.375456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.375540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.375567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.375655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.375682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.375768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.375793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.375873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.375900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.376012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.376039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 
00:31:36.575 [2024-11-26 18:27:24.376125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.376150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.376234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.376259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.376357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.376384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.376471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.376496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.376607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.376633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.376742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.575 [2024-11-26 18:27:24.376768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.575 qpair failed and we were unable to recover it. 00:31:36.575 [2024-11-26 18:27:24.376852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.376878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.376953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.376978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.377073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.377113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.377198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.377225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 
00:31:36.576 [2024-11-26 18:27:24.377313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.377341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.377426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.377452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.377534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.377565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.377648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.377673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.377783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.377809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.377888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.377914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.378004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.378033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.378120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.378147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.378236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.378261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.378352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.378378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 
00:31:36.576 [2024-11-26 18:27:24.378461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.378487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.378571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.378596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.378730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.378756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.378840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.378866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.378948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.378974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.379083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.379111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.379211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.379239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.379338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.379366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.379452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.379478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 
00:31:36.576 [2024-11-26 18:27:24.379566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.576 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.576 [2024-11-26 18:27:24.379592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:36.576 qpair failed and we were unable to recover it.
00:31:36.576 [2024-11-26 18:27:24.379679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.576 [2024-11-26 18:27:24.379706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:36.576 qpair failed and we were unable to recover it.
00:31:36.576 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:36.576 [2024-11-26 18:27:24.379790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.576 [2024-11-26 18:27:24.379816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:36.576 qpair failed and we were unable to recover it.
00:31:36.576 [2024-11-26 18:27:24.379921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.576 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.576 [2024-11-26 18:27:24.379946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:36.576 qpair failed and we were unable to recover it.
00:31:36.576 [2024-11-26 18:27:24.380026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.576 [2024-11-26 18:27:24.380051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:36.576 qpair failed and we were unable to recover it.
00:31:36.576 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:31:36.576 [2024-11-26 18:27:24.380159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.576 [2024-11-26 18:27:24.380184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420
00:31:36.576 qpair failed and we were unable to recover it.
00:31:36.576 [2024-11-26 18:27:24.380280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.576 [2024-11-26 18:27:24.380323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5078000b90 with addr=10.0.0.2, port=4420
00:31:36.576 qpair failed and we were unable to recover it.
00:31:36.576 [2024-11-26 18:27:24.380432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:36.576 [2024-11-26 18:27:24.380458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420
00:31:36.576 qpair failed and we were unable to recover it.
00:31:36.576 [2024-11-26 18:27:24.380548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.380580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.380671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.380696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.380775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.380801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.380916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.380944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.381032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.381059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.576 [2024-11-26 18:27:24.381155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.576 [2024-11-26 18:27:24.381181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.576 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.381266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.381292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.381411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.381437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.381523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.381549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.381631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.381657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 
00:31:36.577 [2024-11-26 18:27:24.381735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.381761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.381842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.381868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.381953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.381979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.382075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.382114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.382206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.382233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.382349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.382378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.382466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.382491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.382569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.382595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.382677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.382704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5080000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.382813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.382839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 
00:31:36.577 [2024-11-26 18:27:24.382919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.382945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.383019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.383045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.383127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.383152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.383234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.383259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfaffa0 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.383353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:36.577 [2024-11-26 18:27:24.383382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5074000b90 with addr=10.0.0.2, port=4420 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.383864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.577 [2024-11-26 18:27:24.386155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.577 [2024-11-26 18:27:24.386274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.577 [2024-11-26 18:27:24.386315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.577 [2024-11-26 18:27:24.386335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.577 [2024-11-26 18:27:24.386355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.577 [2024-11-26 18:27:24.386395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.577 qpair failed and we were unable to recover it. 
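Note: errno = 111 in the connect() retries above is ECONNREFUSED; the host keeps re-issuing connect() to 10.0.0.2:4420 until the target listener comes up (the tcp.c:1081 NOTICE in the last block), after which the fabrics CONNECT command itself is rejected by the target ("Unknown controller ID 0x1", sct 1, sc 130). A quick stand-alone check of the errno value (not part of the test script, assumes python3 is available on the build host):

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # prints: ECONNREFUSED Connection refused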
00:31:36.577 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.577 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:36.577 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:36.577 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:36.577 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:36.577 18:27:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 724226 00:31:36.577 [2024-11-26 18:27:24.395946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.577 [2024-11-26 18:27:24.396032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.577 [2024-11-26 18:27:24.396060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.577 [2024-11-26 18:27:24.396075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.577 [2024-11-26 18:27:24.396088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.577 [2024-11-26 18:27:24.396119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.406025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.577 [2024-11-26 18:27:24.406143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.577 [2024-11-26 18:27:24.406170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.577 [2024-11-26 18:27:24.406185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.577 [2024-11-26 18:27:24.406199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.577 [2024-11-26 18:27:24.406230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.577 qpair failed and we were unable to recover it. 
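Note: stripped of the interleaved connect retries, the target-side steps recorded by the xtrace lines in this excerpt amount to the sequence below. This is a sketch reconstructed from the trace, not the verbatim host/target_disconnect.sh (the transport and subsystem creation that precede it fall outside this excerpt); rpc_cmd is the autotest harness's RPC wrapper, which forwards to scripts/rpc.py.

  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                            # target_disconnect.sh@24
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # target_disconnect.sh@25
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420                    # target_disconnect.sh@26
  wait 724226   # target_disconnect.sh@50: wait on the backgrounded host process (PID taken from the trace)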
00:31:36.577 [2024-11-26 18:27:24.416001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.577 [2024-11-26 18:27:24.416098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.577 [2024-11-26 18:27:24.416126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.577 [2024-11-26 18:27:24.416141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.577 [2024-11-26 18:27:24.416160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.577 [2024-11-26 18:27:24.416191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.425903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.577 [2024-11-26 18:27:24.425993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.577 [2024-11-26 18:27:24.426020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.577 [2024-11-26 18:27:24.426035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.577 [2024-11-26 18:27:24.426049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.577 [2024-11-26 18:27:24.426080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.577 qpair failed and we were unable to recover it. 00:31:36.577 [2024-11-26 18:27:24.435934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.577 [2024-11-26 18:27:24.436022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.577 [2024-11-26 18:27:24.436052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.577 [2024-11-26 18:27:24.436068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.577 [2024-11-26 18:27:24.436081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.578 [2024-11-26 18:27:24.436111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.578 qpair failed and we were unable to recover it. 
00:31:36.578 [2024-11-26 18:27:24.445956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.578 [2024-11-26 18:27:24.446041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.578 [2024-11-26 18:27:24.446068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.578 [2024-11-26 18:27:24.446082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.578 [2024-11-26 18:27:24.446096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.578 [2024-11-26 18:27:24.446126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.578 qpair failed and we were unable to recover it. 00:31:36.578 [2024-11-26 18:27:24.456004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.578 [2024-11-26 18:27:24.456096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.578 [2024-11-26 18:27:24.456122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.578 [2024-11-26 18:27:24.456137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.578 [2024-11-26 18:27:24.456150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.578 [2024-11-26 18:27:24.456181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.578 qpair failed and we were unable to recover it. 00:31:36.578 [2024-11-26 18:27:24.466081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.578 [2024-11-26 18:27:24.466166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.578 [2024-11-26 18:27:24.466198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.578 [2024-11-26 18:27:24.466213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.578 [2024-11-26 18:27:24.466226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.578 [2024-11-26 18:27:24.466256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.578 qpair failed and we were unable to recover it. 
00:31:36.578 [2024-11-26 18:27:24.476086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.578 [2024-11-26 18:27:24.476173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.578 [2024-11-26 18:27:24.476200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.578 [2024-11-26 18:27:24.476215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.578 [2024-11-26 18:27:24.476228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.578 [2024-11-26 18:27:24.476258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.578 qpair failed and we were unable to recover it. 00:31:36.578 [2024-11-26 18:27:24.486107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.578 [2024-11-26 18:27:24.486189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.578 [2024-11-26 18:27:24.486217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.578 [2024-11-26 18:27:24.486231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.578 [2024-11-26 18:27:24.486244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.578 [2024-11-26 18:27:24.486277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.578 qpair failed and we were unable to recover it. 00:31:36.578 [2024-11-26 18:27:24.496197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.578 [2024-11-26 18:27:24.496287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.578 [2024-11-26 18:27:24.496326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.578 [2024-11-26 18:27:24.496343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.578 [2024-11-26 18:27:24.496355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.578 [2024-11-26 18:27:24.496387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.578 qpair failed and we were unable to recover it. 
00:31:36.578 [2024-11-26 18:27:24.506210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.578 [2024-11-26 18:27:24.506308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.578 [2024-11-26 18:27:24.506374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.578 [2024-11-26 18:27:24.506390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.578 [2024-11-26 18:27:24.506409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:36.578 [2024-11-26 18:27:24.506457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:36.578 qpair failed and we were unable to recover it. 00:31:36.838 [2024-11-26 18:27:24.516179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.838 [2024-11-26 18:27:24.516274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.838 [2024-11-26 18:27:24.516315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.838 [2024-11-26 18:27:24.516343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.838 [2024-11-26 18:27:24.516369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5078000b90 00:31:36.838 [2024-11-26 18:27:24.516420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:36.838 qpair failed and we were unable to recover it. 00:31:36.838 [2024-11-26 18:27:24.526207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.838 [2024-11-26 18:27:24.526297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.838 [2024-11-26 18:27:24.526342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.838 [2024-11-26 18:27:24.526359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.838 [2024-11-26 18:27:24.526373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.838 [2024-11-26 18:27:24.526405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.838 qpair failed and we were unable to recover it. 
00:31:36.838 [2024-11-26 18:27:24.536282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.838 [2024-11-26 18:27:24.536397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.838 [2024-11-26 18:27:24.536425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.838 [2024-11-26 18:27:24.536449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.838 [2024-11-26 18:27:24.536463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.838 [2024-11-26 18:27:24.536492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.838 qpair failed and we were unable to recover it. 00:31:36.838 [2024-11-26 18:27:24.546261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.838 [2024-11-26 18:27:24.546353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.838 [2024-11-26 18:27:24.546380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.838 [2024-11-26 18:27:24.546395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.838 [2024-11-26 18:27:24.546408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.838 [2024-11-26 18:27:24.546440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.838 qpair failed and we were unable to recover it. 00:31:36.838 [2024-11-26 18:27:24.556282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.838 [2024-11-26 18:27:24.556377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.838 [2024-11-26 18:27:24.556403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.838 [2024-11-26 18:27:24.556417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.838 [2024-11-26 18:27:24.556430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.838 [2024-11-26 18:27:24.556460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.838 qpair failed and we were unable to recover it. 
00:31:36.838 [2024-11-26 18:27:24.566318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.838 [2024-11-26 18:27:24.566411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.838 [2024-11-26 18:27:24.566437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.838 [2024-11-26 18:27:24.566451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.838 [2024-11-26 18:27:24.566465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.838 [2024-11-26 18:27:24.566494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.838 qpair failed and we were unable to recover it. 00:31:36.838 [2024-11-26 18:27:24.576383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.838 [2024-11-26 18:27:24.576488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.838 [2024-11-26 18:27:24.576514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.838 [2024-11-26 18:27:24.576528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.838 [2024-11-26 18:27:24.576541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.838 [2024-11-26 18:27:24.576571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.838 qpair failed and we were unable to recover it. 00:31:36.838 [2024-11-26 18:27:24.586410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.838 [2024-11-26 18:27:24.586505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.838 [2024-11-26 18:27:24.586531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.838 [2024-11-26 18:27:24.586546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.838 [2024-11-26 18:27:24.586566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.838 [2024-11-26 18:27:24.586594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.838 qpair failed and we were unable to recover it. 
00:31:36.838 [2024-11-26 18:27:24.596409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.838 [2024-11-26 18:27:24.596497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.838 [2024-11-26 18:27:24.596531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.596546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.839 [2024-11-26 18:27:24.596559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.839 [2024-11-26 18:27:24.596587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.839 qpair failed and we were unable to recover it. 00:31:36.839 [2024-11-26 18:27:24.606427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.839 [2024-11-26 18:27:24.606514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.839 [2024-11-26 18:27:24.606539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.606553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.839 [2024-11-26 18:27:24.606566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.839 [2024-11-26 18:27:24.606597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.839 qpair failed and we were unable to recover it. 00:31:36.839 [2024-11-26 18:27:24.616473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.839 [2024-11-26 18:27:24.616564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.839 [2024-11-26 18:27:24.616590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.616605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.839 [2024-11-26 18:27:24.616618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.839 [2024-11-26 18:27:24.616646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.839 qpair failed and we were unable to recover it. 
00:31:36.839 [2024-11-26 18:27:24.626585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.839 [2024-11-26 18:27:24.626675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.839 [2024-11-26 18:27:24.626701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.626715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.839 [2024-11-26 18:27:24.626728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.839 [2024-11-26 18:27:24.626757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.839 qpair failed and we were unable to recover it. 00:31:36.839 [2024-11-26 18:27:24.636522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.839 [2024-11-26 18:27:24.636612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.839 [2024-11-26 18:27:24.636639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.636653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.839 [2024-11-26 18:27:24.636671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.839 [2024-11-26 18:27:24.636701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.839 qpair failed and we were unable to recover it. 00:31:36.839 [2024-11-26 18:27:24.646670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.839 [2024-11-26 18:27:24.646802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.839 [2024-11-26 18:27:24.646829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.646843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.839 [2024-11-26 18:27:24.646856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.839 [2024-11-26 18:27:24.646885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.839 qpair failed and we were unable to recover it. 
00:31:36.839 [2024-11-26 18:27:24.656582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.839 [2024-11-26 18:27:24.656672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.839 [2024-11-26 18:27:24.656697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.656711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.839 [2024-11-26 18:27:24.656725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.839 [2024-11-26 18:27:24.656753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.839 qpair failed and we were unable to recover it. 00:31:36.839 [2024-11-26 18:27:24.666615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.839 [2024-11-26 18:27:24.666704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.839 [2024-11-26 18:27:24.666730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.666744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.839 [2024-11-26 18:27:24.666758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.839 [2024-11-26 18:27:24.666786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.839 qpair failed and we were unable to recover it. 00:31:36.839 [2024-11-26 18:27:24.676638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.839 [2024-11-26 18:27:24.676722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.839 [2024-11-26 18:27:24.676748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.676762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.839 [2024-11-26 18:27:24.676775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.839 [2024-11-26 18:27:24.676805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.839 qpair failed and we were unable to recover it. 
00:31:36.839 [2024-11-26 18:27:24.686659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.839 [2024-11-26 18:27:24.686746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.839 [2024-11-26 18:27:24.686772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.839 [2024-11-26 18:27:24.686787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.840 [2024-11-26 18:27:24.686800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.840 [2024-11-26 18:27:24.686829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.840 qpair failed and we were unable to recover it. 00:31:36.840 [2024-11-26 18:27:24.696702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.840 [2024-11-26 18:27:24.696789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.840 [2024-11-26 18:27:24.696816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.840 [2024-11-26 18:27:24.696830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.840 [2024-11-26 18:27:24.696844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.840 [2024-11-26 18:27:24.696872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.840 qpair failed and we were unable to recover it. 00:31:36.840 [2024-11-26 18:27:24.706724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.840 [2024-11-26 18:27:24.706853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.840 [2024-11-26 18:27:24.706879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.840 [2024-11-26 18:27:24.706893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.840 [2024-11-26 18:27:24.706907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.840 [2024-11-26 18:27:24.706937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.840 qpair failed and we were unable to recover it. 
00:31:36.840 [2024-11-26 18:27:24.716730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.840 [2024-11-26 18:27:24.716815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.840 [2024-11-26 18:27:24.716840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.840 [2024-11-26 18:27:24.716854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.840 [2024-11-26 18:27:24.716867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.840 [2024-11-26 18:27:24.716896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.840 qpair failed and we were unable to recover it. 00:31:36.840 [2024-11-26 18:27:24.726793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.840 [2024-11-26 18:27:24.726900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.840 [2024-11-26 18:27:24.726931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.840 [2024-11-26 18:27:24.726946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.840 [2024-11-26 18:27:24.726959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.840 [2024-11-26 18:27:24.726987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.840 qpair failed and we were unable to recover it. 00:31:36.840 [2024-11-26 18:27:24.736831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.840 [2024-11-26 18:27:24.736937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.840 [2024-11-26 18:27:24.736962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.840 [2024-11-26 18:27:24.736976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.840 [2024-11-26 18:27:24.736989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.840 [2024-11-26 18:27:24.737017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.840 qpair failed and we were unable to recover it. 
00:31:36.840 [2024-11-26 18:27:24.746822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.840 [2024-11-26 18:27:24.746916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.840 [2024-11-26 18:27:24.746940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.840 [2024-11-26 18:27:24.746955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.840 [2024-11-26 18:27:24.746968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.840 [2024-11-26 18:27:24.746996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.840 qpair failed and we were unable to recover it. 00:31:36.840 [2024-11-26 18:27:24.756909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.840 [2024-11-26 18:27:24.757032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.840 [2024-11-26 18:27:24.757057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.840 [2024-11-26 18:27:24.757071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.840 [2024-11-26 18:27:24.757085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.840 [2024-11-26 18:27:24.757113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.840 qpair failed and we were unable to recover it. 00:31:36.840 [2024-11-26 18:27:24.766892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.840 [2024-11-26 18:27:24.766982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.840 [2024-11-26 18:27:24.767008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.840 [2024-11-26 18:27:24.767022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.840 [2024-11-26 18:27:24.767041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.840 [2024-11-26 18:27:24.767070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.840 qpair failed and we were unable to recover it. 
00:31:36.840 [2024-11-26 18:27:24.777000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.840 [2024-11-26 18:27:24.777089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.840 [2024-11-26 18:27:24.777115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.840 [2024-11-26 18:27:24.777129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.841 [2024-11-26 18:27:24.777142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.841 [2024-11-26 18:27:24.777171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.841 qpair failed and we were unable to recover it. 00:31:36.841 [2024-11-26 18:27:24.786972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.841 [2024-11-26 18:27:24.787089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.841 [2024-11-26 18:27:24.787115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.841 [2024-11-26 18:27:24.787129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.841 [2024-11-26 18:27:24.787142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.841 [2024-11-26 18:27:24.787172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.841 qpair failed and we were unable to recover it. 00:31:36.841 [2024-11-26 18:27:24.796969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.841 [2024-11-26 18:27:24.797097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.841 [2024-11-26 18:27:24.797124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.841 [2024-11-26 18:27:24.797138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.841 [2024-11-26 18:27:24.797151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.841 [2024-11-26 18:27:24.797181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.841 qpair failed and we were unable to recover it. 
00:31:36.841 [2024-11-26 18:27:24.806996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.841 [2024-11-26 18:27:24.807078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.841 [2024-11-26 18:27:24.807105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.841 [2024-11-26 18:27:24.807119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.841 [2024-11-26 18:27:24.807132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.841 [2024-11-26 18:27:24.807161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.841 qpair failed and we were unable to recover it. 00:31:36.841 [2024-11-26 18:27:24.817030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.841 [2024-11-26 18:27:24.817121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.841 [2024-11-26 18:27:24.817147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.841 [2024-11-26 18:27:24.817162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.841 [2024-11-26 18:27:24.817175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.841 [2024-11-26 18:27:24.817204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.841 qpair failed and we were unable to recover it. 00:31:36.841 [2024-11-26 18:27:24.827045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.841 [2024-11-26 18:27:24.827132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.841 [2024-11-26 18:27:24.827156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.841 [2024-11-26 18:27:24.827171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.841 [2024-11-26 18:27:24.827184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.841 [2024-11-26 18:27:24.827213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.841 qpair failed and we were unable to recover it. 
00:31:36.841 [2024-11-26 18:27:24.837096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:36.841 [2024-11-26 18:27:24.837186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:36.841 [2024-11-26 18:27:24.837214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:36.841 [2024-11-26 18:27:24.837231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:36.841 [2024-11-26 18:27:24.837245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:36.841 [2024-11-26 18:27:24.837275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:36.841 qpair failed and we were unable to recover it. 00:31:37.100 [2024-11-26 18:27:24.847122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.847247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.847273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.847287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.847300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.847341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-11-26 18:27:24.857138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.857228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.857260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.857275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.857288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.857324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 
00:31:37.101 [2024-11-26 18:27:24.867205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.867293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.867327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.867342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.867361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.867390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-11-26 18:27:24.877237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.877329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.877366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.877381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.877394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.877423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-11-26 18:27:24.887221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.887324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.887353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.887369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.887382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.887412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 
00:31:37.101 [2024-11-26 18:27:24.897282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.897396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.897423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.897437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.897456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.897487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-11-26 18:27:24.907325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.907436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.907461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.907476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.907490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.907519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-11-26 18:27:24.917343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.917432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.917457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.917471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.917484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.917513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 
00:31:37.101 [2024-11-26 18:27:24.927355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.927444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.927469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.927484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.927496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.927525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-11-26 18:27:24.937402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.937517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.937543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.937557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.937570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.937598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-11-26 18:27:24.947401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.947483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.947509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.947523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.947536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.947564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 
00:31:37.101 [2024-11-26 18:27:24.957461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.957585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.957610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.957625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.957638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.957666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.101 [2024-11-26 18:27:24.967449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.101 [2024-11-26 18:27:24.967534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.101 [2024-11-26 18:27:24.967560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.101 [2024-11-26 18:27:24.967575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.101 [2024-11-26 18:27:24.967587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.101 [2024-11-26 18:27:24.967616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.101 qpair failed and we were unable to recover it. 00:31:37.102 [2024-11-26 18:27:24.977499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:24.977592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:24.977618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:24.977632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:24.977645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:24.977674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-11-26 18:27:24.987524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:24.987606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:24.987638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:24.987653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:24.987665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:24.987696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-11-26 18:27:24.997599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:24.997685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:24.997719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:24.997734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:24.997746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:24.997775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-11-26 18:27:25.007551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.007647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.007673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.007688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.007700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.007728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-11-26 18:27:25.017640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.017737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.017762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.017776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.017788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.017816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-11-26 18:27:25.027628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.027713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.027739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.027754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.027773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.027802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-11-26 18:27:25.037669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.037763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.037792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.037809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.037822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.037851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-11-26 18:27:25.047697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.047816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.047842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.047856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.047869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.047898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-11-26 18:27:25.057720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.057815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.057841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.057855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.057867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.057897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-11-26 18:27:25.067799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.067893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.067919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.067933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.067945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.067974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-11-26 18:27:25.077785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.077874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.077900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.077915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.077928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.077956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-11-26 18:27:25.087802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.087883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.087909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.087924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.087936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.087965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 00:31:37.102 [2024-11-26 18:27:25.097817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.102 [2024-11-26 18:27:25.097906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.102 [2024-11-26 18:27:25.097932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.102 [2024-11-26 18:27:25.097946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.102 [2024-11-26 18:27:25.097960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.102 [2024-11-26 18:27:25.097989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.102 qpair failed and we were unable to recover it. 
00:31:37.102 [2024-11-26 18:27:25.107969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.103 [2024-11-26 18:27:25.108058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.103 [2024-11-26 18:27:25.108084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.103 [2024-11-26 18:27:25.108098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.103 [2024-11-26 18:27:25.108111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.103 [2024-11-26 18:27:25.108140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.103 qpair failed and we were unable to recover it. 00:31:37.362 [2024-11-26 18:27:25.117904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.362 [2024-11-26 18:27:25.117993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.362 [2024-11-26 18:27:25.118024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.362 [2024-11-26 18:27:25.118040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.362 [2024-11-26 18:27:25.118052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.362 [2024-11-26 18:27:25.118081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.362 qpair failed and we were unable to recover it. 00:31:37.362 [2024-11-26 18:27:25.127884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.362 [2024-11-26 18:27:25.127969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.362 [2024-11-26 18:27:25.127994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.362 [2024-11-26 18:27:25.128008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.362 [2024-11-26 18:27:25.128021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.362 [2024-11-26 18:27:25.128050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.362 qpair failed and we were unable to recover it. 
00:31:37.362 [2024-11-26 18:27:25.137931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.362 [2024-11-26 18:27:25.138042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.362 [2024-11-26 18:27:25.138066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.362 [2024-11-26 18:27:25.138080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.362 [2024-11-26 18:27:25.138093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.362 [2024-11-26 18:27:25.138121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.362 qpair failed and we were unable to recover it. 00:31:37.362 [2024-11-26 18:27:25.147949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.362 [2024-11-26 18:27:25.148039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.362 [2024-11-26 18:27:25.148066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.362 [2024-11-26 18:27:25.148081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.362 [2024-11-26 18:27:25.148093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.362 [2024-11-26 18:27:25.148122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.362 qpair failed and we were unable to recover it. 00:31:37.362 [2024-11-26 18:27:25.158394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.362 [2024-11-26 18:27:25.158528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.362 [2024-11-26 18:27:25.158553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.362 [2024-11-26 18:27:25.158567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.158586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.158615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 
00:31:37.363 [2024-11-26 18:27:25.168073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.168160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.168185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.168199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.168212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.168242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 00:31:37.363 [2024-11-26 18:27:25.178132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.178249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.178275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.178290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.178309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.178340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 00:31:37.363 [2024-11-26 18:27:25.188162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.188296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.188335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.188349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.188362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.188391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 
00:31:37.363 [2024-11-26 18:27:25.198103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.198224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.198249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.198263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.198275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.198312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 00:31:37.363 [2024-11-26 18:27:25.208154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.208271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.208297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.208320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.208333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.208362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 00:31:37.363 [2024-11-26 18:27:25.218212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.218327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.218357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.218372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.218385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.218415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 
00:31:37.363 [2024-11-26 18:27:25.228235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.228335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.228362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.228377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.228390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.228419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 00:31:37.363 [2024-11-26 18:27:25.238212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.238294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.238326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.238342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.238354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.238383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 00:31:37.363 [2024-11-26 18:27:25.248248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.248364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.248398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.248413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.248427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.248456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 
00:31:37.363 [2024-11-26 18:27:25.258282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.258395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.258421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.258435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.258448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.258479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 00:31:37.363 [2024-11-26 18:27:25.268348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.268456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.268482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.268496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.268509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.268539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 00:31:37.363 [2024-11-26 18:27:25.278350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.278434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.278460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.278474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.278487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.363 [2024-11-26 18:27:25.278518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.363 qpair failed and we were unable to recover it. 
00:31:37.363 [2024-11-26 18:27:25.288367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.363 [2024-11-26 18:27:25.288459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.363 [2024-11-26 18:27:25.288485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.363 [2024-11-26 18:27:25.288499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.363 [2024-11-26 18:27:25.288517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.364 [2024-11-26 18:27:25.288547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.364 qpair failed and we were unable to recover it. 00:31:37.364 [2024-11-26 18:27:25.298387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.364 [2024-11-26 18:27:25.298523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.364 [2024-11-26 18:27:25.298548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.364 [2024-11-26 18:27:25.298563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.364 [2024-11-26 18:27:25.298576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.364 [2024-11-26 18:27:25.298604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.364 qpair failed and we were unable to recover it. 00:31:37.364 [2024-11-26 18:27:25.308400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.364 [2024-11-26 18:27:25.308496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.364 [2024-11-26 18:27:25.308522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.364 [2024-11-26 18:27:25.308536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.364 [2024-11-26 18:27:25.308549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.364 [2024-11-26 18:27:25.308578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.364 qpair failed and we were unable to recover it. 
00:31:37.364 [2024-11-26 18:27:25.318419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.364 [2024-11-26 18:27:25.318500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.364 [2024-11-26 18:27:25.318526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.364 [2024-11-26 18:27:25.318540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.364 [2024-11-26 18:27:25.318553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.364 [2024-11-26 18:27:25.318581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.364 qpair failed and we were unable to recover it. 00:31:37.364 [2024-11-26 18:27:25.328444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.364 [2024-11-26 18:27:25.328554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.364 [2024-11-26 18:27:25.328580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.364 [2024-11-26 18:27:25.328595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.364 [2024-11-26 18:27:25.328608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.364 [2024-11-26 18:27:25.328636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.364 qpair failed and we were unable to recover it. 00:31:37.364 [2024-11-26 18:27:25.338494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.364 [2024-11-26 18:27:25.338586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.364 [2024-11-26 18:27:25.338612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.364 [2024-11-26 18:27:25.338626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.364 [2024-11-26 18:27:25.338639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.364 [2024-11-26 18:27:25.338668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.364 qpair failed and we were unable to recover it. 
00:31:37.364 [2024-11-26 18:27:25.348539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.364 [2024-11-26 18:27:25.348630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.364 [2024-11-26 18:27:25.348656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.364 [2024-11-26 18:27:25.348670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.364 [2024-11-26 18:27:25.348683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.364 [2024-11-26 18:27:25.348712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.364 qpair failed and we were unable to recover it. 00:31:37.364 [2024-11-26 18:27:25.358559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.364 [2024-11-26 18:27:25.358693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.364 [2024-11-26 18:27:25.358718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.364 [2024-11-26 18:27:25.358733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.364 [2024-11-26 18:27:25.358745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.364 [2024-11-26 18:27:25.358774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.364 qpair failed and we were unable to recover it. 00:31:37.364 [2024-11-26 18:27:25.368552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.364 [2024-11-26 18:27:25.368631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.364 [2024-11-26 18:27:25.368656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.364 [2024-11-26 18:27:25.368671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.364 [2024-11-26 18:27:25.368684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.364 [2024-11-26 18:27:25.368713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.364 qpair failed and we were unable to recover it. 
00:31:37.624 [2024-11-26 18:27:25.378634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.624 [2024-11-26 18:27:25.378721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.624 [2024-11-26 18:27:25.378752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.624 [2024-11-26 18:27:25.378766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.624 [2024-11-26 18:27:25.378780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.624 [2024-11-26 18:27:25.378809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.624 qpair failed and we were unable to recover it. 00:31:37.624 [2024-11-26 18:27:25.388629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.624 [2024-11-26 18:27:25.388708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.624 [2024-11-26 18:27:25.388734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.624 [2024-11-26 18:27:25.388748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.624 [2024-11-26 18:27:25.388761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.624 [2024-11-26 18:27:25.388790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.624 qpair failed and we were unable to recover it. 00:31:37.624 [2024-11-26 18:27:25.398690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.624 [2024-11-26 18:27:25.398779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.624 [2024-11-26 18:27:25.398805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.624 [2024-11-26 18:27:25.398819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.624 [2024-11-26 18:27:25.398832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.624 [2024-11-26 18:27:25.398861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.624 qpair failed and we were unable to recover it. 
00:31:37.624 [2024-11-26 18:27:25.408694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.624 [2024-11-26 18:27:25.408803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.624 [2024-11-26 18:27:25.408829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.624 [2024-11-26 18:27:25.408843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.624 [2024-11-26 18:27:25.408856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.624 [2024-11-26 18:27:25.408887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.624 qpair failed and we were unable to recover it. 00:31:37.624 [2024-11-26 18:27:25.418777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.624 [2024-11-26 18:27:25.418866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.624 [2024-11-26 18:27:25.418892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.624 [2024-11-26 18:27:25.418907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.624 [2024-11-26 18:27:25.418926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.624 [2024-11-26 18:27:25.418955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.624 qpair failed and we were unable to recover it. 00:31:37.624 [2024-11-26 18:27:25.428786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.624 [2024-11-26 18:27:25.428876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.624 [2024-11-26 18:27:25.428902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.624 [2024-11-26 18:27:25.428916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.624 [2024-11-26 18:27:25.428929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.624 [2024-11-26 18:27:25.428958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.624 qpair failed and we were unable to recover it. 
00:31:37.624 [2024-11-26 18:27:25.438825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.624 [2024-11-26 18:27:25.438910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.624 [2024-11-26 18:27:25.438935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.625 [2024-11-26 18:27:25.438949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.625 [2024-11-26 18:27:25.438962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.625 [2024-11-26 18:27:25.438991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.625 qpair failed and we were unable to recover it. 00:31:37.625 [2024-11-26 18:27:25.448812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.625 [2024-11-26 18:27:25.448892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.625 [2024-11-26 18:27:25.448918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.625 [2024-11-26 18:27:25.448933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.625 [2024-11-26 18:27:25.448945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.625 [2024-11-26 18:27:25.448973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.625 qpair failed and we were unable to recover it. 00:31:37.625 [2024-11-26 18:27:25.458885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.625 [2024-11-26 18:27:25.458974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.625 [2024-11-26 18:27:25.458999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.625 [2024-11-26 18:27:25.459013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.625 [2024-11-26 18:27:25.459026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.625 [2024-11-26 18:27:25.459055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.625 qpair failed and we were unable to recover it. 
00:31:37.625 [2024-11-26 18:27:25.468906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.625 [2024-11-26 18:27:25.468998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.625 [2024-11-26 18:27:25.469023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.625 [2024-11-26 18:27:25.469037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.625 [2024-11-26 18:27:25.469050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.625 [2024-11-26 18:27:25.469079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.625 qpair failed and we were unable to recover it. 00:31:37.625 [2024-11-26 18:27:25.478930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.625 [2024-11-26 18:27:25.479048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.625 [2024-11-26 18:27:25.479076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.625 [2024-11-26 18:27:25.479095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.625 [2024-11-26 18:27:25.479108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.625 [2024-11-26 18:27:25.479139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.625 qpair failed and we were unable to recover it. 00:31:37.625 [2024-11-26 18:27:25.489060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.625 [2024-11-26 18:27:25.489151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.625 [2024-11-26 18:27:25.489177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.625 [2024-11-26 18:27:25.489192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.625 [2024-11-26 18:27:25.489205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.625 [2024-11-26 18:27:25.489234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.625 qpair failed and we were unable to recover it. 
00:31:37.625 [2024-11-26 18:27:25.498976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.625 [2024-11-26 18:27:25.499068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.625 [2024-11-26 18:27:25.499094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.625 [2024-11-26 18:27:25.499109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.625 [2024-11-26 18:27:25.499121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.625 [2024-11-26 18:27:25.499150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.625 qpair failed and we were unable to recover it. 00:31:37.625 [2024-11-26 18:27:25.509010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.625 [2024-11-26 18:27:25.509097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.625 [2024-11-26 18:27:25.509128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.625 [2024-11-26 18:27:25.509143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.625 [2024-11-26 18:27:25.509156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.625 [2024-11-26 18:27:25.509184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.625 qpair failed and we were unable to recover it. 00:31:37.625 [2024-11-26 18:27:25.519035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.625 [2024-11-26 18:27:25.519159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.625 [2024-11-26 18:27:25.519184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.625 [2024-11-26 18:27:25.519198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.625 [2024-11-26 18:27:25.519212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.625 [2024-11-26 18:27:25.519240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.625 qpair failed and we were unable to recover it. 
00:31:37.625 [2024-11-26 18:27:25.529044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.626 [2024-11-26 18:27:25.529129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.626 [2024-11-26 18:27:25.529155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.626 [2024-11-26 18:27:25.529169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.626 [2024-11-26 18:27:25.529181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.626 [2024-11-26 18:27:25.529210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.626 qpair failed and we were unable to recover it. 00:31:37.626 [2024-11-26 18:27:25.539105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.626 [2024-11-26 18:27:25.539194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.626 [2024-11-26 18:27:25.539220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.626 [2024-11-26 18:27:25.539234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.626 [2024-11-26 18:27:25.539247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.626 [2024-11-26 18:27:25.539275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.626 qpair failed and we were unable to recover it. 00:31:37.626 [2024-11-26 18:27:25.549090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.626 [2024-11-26 18:27:25.549172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.626 [2024-11-26 18:27:25.549198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.626 [2024-11-26 18:27:25.549213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.626 [2024-11-26 18:27:25.549231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.626 [2024-11-26 18:27:25.549261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.626 qpair failed and we were unable to recover it. 
00:31:37.626 [2024-11-26 18:27:25.559152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.626 [2024-11-26 18:27:25.559238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.626 [2024-11-26 18:27:25.559264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.626 [2024-11-26 18:27:25.559279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.626 [2024-11-26 18:27:25.559291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.626 [2024-11-26 18:27:25.559330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.626 qpair failed and we were unable to recover it. 00:31:37.626 [2024-11-26 18:27:25.569144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.626 [2024-11-26 18:27:25.569224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.626 [2024-11-26 18:27:25.569249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.626 [2024-11-26 18:27:25.569264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.626 [2024-11-26 18:27:25.569276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.626 [2024-11-26 18:27:25.569316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.626 qpair failed and we were unable to recover it. 00:31:37.626 [2024-11-26 18:27:25.579262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.626 [2024-11-26 18:27:25.579363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.626 [2024-11-26 18:27:25.579388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.626 [2024-11-26 18:27:25.579402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.626 [2024-11-26 18:27:25.579415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.626 [2024-11-26 18:27:25.579445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.626 qpair failed and we were unable to recover it. 
00:31:37.626 [2024-11-26 18:27:25.589213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.626 [2024-11-26 18:27:25.589308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.626 [2024-11-26 18:27:25.589334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.626 [2024-11-26 18:27:25.589349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.626 [2024-11-26 18:27:25.589363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.626 [2024-11-26 18:27:25.589392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.626 qpair failed and we were unable to recover it. 00:31:37.626 [2024-11-26 18:27:25.599247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.626 [2024-11-26 18:27:25.599377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.626 [2024-11-26 18:27:25.599403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.626 [2024-11-26 18:27:25.599417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.626 [2024-11-26 18:27:25.599430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.626 [2024-11-26 18:27:25.599459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.626 qpair failed and we were unable to recover it. 00:31:37.626 [2024-11-26 18:27:25.609276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.626 [2024-11-26 18:27:25.609375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.626 [2024-11-26 18:27:25.609402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.626 [2024-11-26 18:27:25.609417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.626 [2024-11-26 18:27:25.609430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.626 [2024-11-26 18:27:25.609460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.626 qpair failed and we were unable to recover it. 
00:31:37.627 [2024-11-26 18:27:25.619340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.627 [2024-11-26 18:27:25.619476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.627 [2024-11-26 18:27:25.619502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.627 [2024-11-26 18:27:25.619516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.627 [2024-11-26 18:27:25.619529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.627 [2024-11-26 18:27:25.619558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.627 qpair failed and we were unable to recover it. 00:31:37.627 [2024-11-26 18:27:25.629337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.627 [2024-11-26 18:27:25.629424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.627 [2024-11-26 18:27:25.629451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.627 [2024-11-26 18:27:25.629465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.627 [2024-11-26 18:27:25.629478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.627 [2024-11-26 18:27:25.629506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.627 qpair failed and we were unable to recover it. 00:31:37.886 [2024-11-26 18:27:25.639448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.639577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.639609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.639624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.639637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.639665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 
00:31:37.886 [2024-11-26 18:27:25.649400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.649517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.649543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.649558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.649571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.649600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 00:31:37.886 [2024-11-26 18:27:25.659468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.659591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.659617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.659631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.659644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.659674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 00:31:37.886 [2024-11-26 18:27:25.669440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.669533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.669559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.669573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.669586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.669616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 
00:31:37.886 [2024-11-26 18:27:25.679589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.679669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.679695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.679709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.679728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.679758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 00:31:37.886 [2024-11-26 18:27:25.689522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.689613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.689639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.689654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.689666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.689695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 00:31:37.886 [2024-11-26 18:27:25.699573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.699697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.699722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.699736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.699749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.699778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 
00:31:37.886 [2024-11-26 18:27:25.709547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.709656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.709681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.709695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.709708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.709736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 00:31:37.886 [2024-11-26 18:27:25.719617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.719721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.719747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.719762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.719775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.719804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 00:31:37.886 [2024-11-26 18:27:25.729617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.729710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.729735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.729749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.729763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.729792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 
00:31:37.886 [2024-11-26 18:27:25.739755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.739847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.739872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.739887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.739899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.739928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 00:31:37.886 [2024-11-26 18:27:25.749706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.886 [2024-11-26 18:27:25.749786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.886 [2024-11-26 18:27:25.749812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.886 [2024-11-26 18:27:25.749826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.886 [2024-11-26 18:27:25.749839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.886 [2024-11-26 18:27:25.749867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.886 qpair failed and we were unable to recover it. 00:31:37.886 [2024-11-26 18:27:25.759696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.759791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.759816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.759831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.759843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.759874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 
00:31:37.887 [2024-11-26 18:27:25.769728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.769857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.769888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.769903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.769916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.769945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 00:31:37.887 [2024-11-26 18:27:25.779872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.780001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.780027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.780041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.780054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.780083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 00:31:37.887 [2024-11-26 18:27:25.789810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.789919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.789945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.789959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.789972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.790001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 
00:31:37.887 [2024-11-26 18:27:25.799870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.799955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.799984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.799999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.800012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.800042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 00:31:37.887 [2024-11-26 18:27:25.809867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.809971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.809997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.810012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.810030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.810061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 00:31:37.887 [2024-11-26 18:27:25.819905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.819994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.820019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.820033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.820046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.820075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 
00:31:37.887 [2024-11-26 18:27:25.829901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.829994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.830020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.830034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.830047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.830076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 00:31:37.887 [2024-11-26 18:27:25.839937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.840066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.840092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.840107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.840119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.840148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 00:31:37.887 [2024-11-26 18:27:25.850001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.850088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.850113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.850128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.850141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.850170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 
00:31:37.887 [2024-11-26 18:27:25.860028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.860163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.860188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.860203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.860216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.860244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 00:31:37.887 [2024-11-26 18:27:25.870047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.870164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.870189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.870204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.870217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.870245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 00:31:37.887 [2024-11-26 18:27:25.880042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.880126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.887 [2024-11-26 18:27:25.880152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.887 [2024-11-26 18:27:25.880166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.887 [2024-11-26 18:27:25.880179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.887 [2024-11-26 18:27:25.880208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.887 qpair failed and we were unable to recover it. 
00:31:37.887 [2024-11-26 18:27:25.890069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:37.887 [2024-11-26 18:27:25.890185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:37.888 [2024-11-26 18:27:25.890212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:37.888 [2024-11-26 18:27:25.890227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:37.888 [2024-11-26 18:27:25.890240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:37.888 [2024-11-26 18:27:25.890268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:37.888 qpair failed and we were unable to recover it. 00:31:38.147 [2024-11-26 18:27:25.900158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.147 [2024-11-26 18:27:25.900261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.147 [2024-11-26 18:27:25.900295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.147 [2024-11-26 18:27:25.900321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.147 [2024-11-26 18:27:25.900335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.147 [2024-11-26 18:27:25.900364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.147 qpair failed and we were unable to recover it. 00:31:38.147 [2024-11-26 18:27:25.910148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.147 [2024-11-26 18:27:25.910242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.147 [2024-11-26 18:27:25.910266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.147 [2024-11-26 18:27:25.910280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.147 [2024-11-26 18:27:25.910293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.147 [2024-11-26 18:27:25.910329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.147 qpair failed and we were unable to recover it. 
00:31:38.147 [2024-11-26 18:27:25.920157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.147 [2024-11-26 18:27:25.920284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.147 [2024-11-26 18:27:25.920317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.147 [2024-11-26 18:27:25.920333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.147 [2024-11-26 18:27:25.920346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.147 [2024-11-26 18:27:25.920375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.147 qpair failed and we were unable to recover it. 00:31:38.147 [2024-11-26 18:27:25.930202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.147 [2024-11-26 18:27:25.930315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.147 [2024-11-26 18:27:25.930341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.147 [2024-11-26 18:27:25.930356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.147 [2024-11-26 18:27:25.930369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.147 [2024-11-26 18:27:25.930398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.147 qpair failed and we were unable to recover it. 00:31:38.147 [2024-11-26 18:27:25.940236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.147 [2024-11-26 18:27:25.940355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.147 [2024-11-26 18:27:25.940381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.147 [2024-11-26 18:27:25.940395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.147 [2024-11-26 18:27:25.940414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.147 [2024-11-26 18:27:25.940444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.147 qpair failed and we were unable to recover it. 
00:31:38.147 [2024-11-26 18:27:25.950332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.147 [2024-11-26 18:27:25.950420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.147 [2024-11-26 18:27:25.950446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.147 [2024-11-26 18:27:25.950460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.147 [2024-11-26 18:27:25.950472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.147 [2024-11-26 18:27:25.950501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.147 qpair failed and we were unable to recover it. 00:31:38.147 [2024-11-26 18:27:25.960291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.147 [2024-11-26 18:27:25.960406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.147 [2024-11-26 18:27:25.960431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.147 [2024-11-26 18:27:25.960445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.147 [2024-11-26 18:27:25.960458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.147 [2024-11-26 18:27:25.960487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.147 qpair failed and we were unable to recover it. 00:31:38.147 [2024-11-26 18:27:25.970337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.147 [2024-11-26 18:27:25.970449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.147 [2024-11-26 18:27:25.970475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.147 [2024-11-26 18:27:25.970490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.147 [2024-11-26 18:27:25.970502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.147 [2024-11-26 18:27:25.970533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.147 qpair failed and we were unable to recover it. 
00:31:38.148 [2024-11-26 18:27:25.980432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:25.980528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:25.980553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:25.980568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:25.980581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:25.980610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 00:31:38.148 [2024-11-26 18:27:25.990361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:25.990450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:25.990476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:25.990492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:25.990505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:25.990534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 00:31:38.148 [2024-11-26 18:27:26.000385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.000470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.000496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.000511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.000524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.000553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 
00:31:38.148 [2024-11-26 18:27:26.010420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.010505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.010531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.010545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.010558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.010587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 00:31:38.148 [2024-11-26 18:27:26.020470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.020586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.020621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.020635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.020646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.020675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 00:31:38.148 [2024-11-26 18:27:26.030476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.030606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.030637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.030652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.030665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.030693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 
00:31:38.148 [2024-11-26 18:27:26.040531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.040656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.040683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.040697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.040710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.040739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 00:31:38.148 [2024-11-26 18:27:26.050578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.050666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.050692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.050707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.050720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.050748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 00:31:38.148 [2024-11-26 18:27:26.060580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.060675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.060700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.060714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.060732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.060762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 
00:31:38.148 [2024-11-26 18:27:26.070633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.070714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.070739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.070753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.070773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.070802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 00:31:38.148 [2024-11-26 18:27:26.080613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.080705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.080731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.080745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.080759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.080787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 00:31:38.148 [2024-11-26 18:27:26.090690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.090781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.090808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.090823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.090836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.090865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 
00:31:38.148 [2024-11-26 18:27:26.100687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.148 [2024-11-26 18:27:26.100785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.148 [2024-11-26 18:27:26.100811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.148 [2024-11-26 18:27:26.100826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.148 [2024-11-26 18:27:26.100839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.148 [2024-11-26 18:27:26.100870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.148 qpair failed and we were unable to recover it. 00:31:38.148 [2024-11-26 18:27:26.110734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.149 [2024-11-26 18:27:26.110856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.149 [2024-11-26 18:27:26.110882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.149 [2024-11-26 18:27:26.110896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.149 [2024-11-26 18:27:26.110909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.149 [2024-11-26 18:27:26.110939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.149 qpair failed and we were unable to recover it. 00:31:38.149 [2024-11-26 18:27:26.120766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.149 [2024-11-26 18:27:26.120854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.149 [2024-11-26 18:27:26.120880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.149 [2024-11-26 18:27:26.120894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.149 [2024-11-26 18:27:26.120907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.149 [2024-11-26 18:27:26.120937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.149 qpair failed and we were unable to recover it. 
00:31:38.149 [2024-11-26 18:27:26.130786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.149 [2024-11-26 18:27:26.130878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.149 [2024-11-26 18:27:26.130903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.149 [2024-11-26 18:27:26.130918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.149 [2024-11-26 18:27:26.130931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.149 [2024-11-26 18:27:26.130960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.149 qpair failed and we were unable to recover it. 00:31:38.149 [2024-11-26 18:27:26.140835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.149 [2024-11-26 18:27:26.140924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.149 [2024-11-26 18:27:26.140949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.149 [2024-11-26 18:27:26.140964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.149 [2024-11-26 18:27:26.140977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.149 [2024-11-26 18:27:26.141005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.149 qpair failed and we were unable to recover it. 00:31:38.149 [2024-11-26 18:27:26.150896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.149 [2024-11-26 18:27:26.150991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.149 [2024-11-26 18:27:26.151016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.149 [2024-11-26 18:27:26.151030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.149 [2024-11-26 18:27:26.151044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.149 [2024-11-26 18:27:26.151073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.149 qpair failed and we were unable to recover it. 
00:31:38.408 [2024-11-26 18:27:26.160860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.408 [2024-11-26 18:27:26.160955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.408 [2024-11-26 18:27:26.160985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.408 [2024-11-26 18:27:26.161000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.408 [2024-11-26 18:27:26.161013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.408 [2024-11-26 18:27:26.161043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.408 qpair failed and we were unable to recover it. 00:31:38.408 [2024-11-26 18:27:26.170876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.408 [2024-11-26 18:27:26.170963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.408 [2024-11-26 18:27:26.170988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.408 [2024-11-26 18:27:26.171003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.408 [2024-11-26 18:27:26.171016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.408 [2024-11-26 18:27:26.171045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.408 qpair failed and we were unable to recover it. 00:31:38.408 [2024-11-26 18:27:26.180975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.408 [2024-11-26 18:27:26.181080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.408 [2024-11-26 18:27:26.181105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.408 [2024-11-26 18:27:26.181119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.408 [2024-11-26 18:27:26.181133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.408 [2024-11-26 18:27:26.181161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.408 qpair failed and we were unable to recover it. 
00:31:38.408 [2024-11-26 18:27:26.191028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.408 [2024-11-26 18:27:26.191109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.408 [2024-11-26 18:27:26.191135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.408 [2024-11-26 18:27:26.191150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.408 [2024-11-26 18:27:26.191163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.408 [2024-11-26 18:27:26.191192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.408 qpair failed and we were unable to recover it. 00:31:38.408 [2024-11-26 18:27:26.200965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.408 [2024-11-26 18:27:26.201046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.408 [2024-11-26 18:27:26.201071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.201086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.201104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.201134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 00:31:38.409 [2024-11-26 18:27:26.211012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.211098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.211123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.211137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.211151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.211180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 
00:31:38.409 [2024-11-26 18:27:26.221025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.221116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.221141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.221155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.221168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.221196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 00:31:38.409 [2024-11-26 18:27:26.231101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.231185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.231211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.231225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.231238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.231267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 00:31:38.409 [2024-11-26 18:27:26.241106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.241224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.241250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.241264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.241276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.241314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 
00:31:38.409 [2024-11-26 18:27:26.251122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.251202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.251228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.251242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.251255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.251284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 00:31:38.409 [2024-11-26 18:27:26.261137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.261228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.261254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.261268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.261280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.261316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 00:31:38.409 [2024-11-26 18:27:26.271161] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.271242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.271267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.271282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.271295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.271335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 
00:31:38.409 [2024-11-26 18:27:26.281183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.281313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.281339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.281353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.281366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.281395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 00:31:38.409 [2024-11-26 18:27:26.291255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.291373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.291404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.291419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.291432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.291461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 00:31:38.409 [2024-11-26 18:27:26.301261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.301376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.301402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.301417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.301430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.301458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 
00:31:38.409 [2024-11-26 18:27:26.311293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.311388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.311413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.311428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.311441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.311469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 00:31:38.409 [2024-11-26 18:27:26.321325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.321408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.321433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.321448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.321460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.409 [2024-11-26 18:27:26.321489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.409 qpair failed and we were unable to recover it. 00:31:38.409 [2024-11-26 18:27:26.331403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.409 [2024-11-26 18:27:26.331489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.409 [2024-11-26 18:27:26.331515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.409 [2024-11-26 18:27:26.331529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.409 [2024-11-26 18:27:26.331548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.410 [2024-11-26 18:27:26.331578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.410 qpair failed and we were unable to recover it. 
00:31:38.410 [2024-11-26 18:27:26.341384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.410 [2024-11-26 18:27:26.341501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.410 [2024-11-26 18:27:26.341526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.410 [2024-11-26 18:27:26.341541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.410 [2024-11-26 18:27:26.341554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.410 [2024-11-26 18:27:26.341582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.410 qpair failed and we were unable to recover it. 00:31:38.410 [2024-11-26 18:27:26.351429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.410 [2024-11-26 18:27:26.351518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.410 [2024-11-26 18:27:26.351544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.410 [2024-11-26 18:27:26.351558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.410 [2024-11-26 18:27:26.351571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.410 [2024-11-26 18:27:26.351600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.410 qpair failed and we were unable to recover it. 00:31:38.410 [2024-11-26 18:27:26.361460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.410 [2024-11-26 18:27:26.361549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.410 [2024-11-26 18:27:26.361574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.410 [2024-11-26 18:27:26.361588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.410 [2024-11-26 18:27:26.361601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.410 [2024-11-26 18:27:26.361629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.410 qpair failed and we were unable to recover it. 
00:31:38.410 [2024-11-26 18:27:26.371445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.410 [2024-11-26 18:27:26.371533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.410 [2024-11-26 18:27:26.371558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.410 [2024-11-26 18:27:26.371573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.410 [2024-11-26 18:27:26.371586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.410 [2024-11-26 18:27:26.371614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.410 qpair failed and we were unable to recover it. 00:31:38.410 [2024-11-26 18:27:26.381492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.410 [2024-11-26 18:27:26.381580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.410 [2024-11-26 18:27:26.381605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.410 [2024-11-26 18:27:26.381620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.410 [2024-11-26 18:27:26.381632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.410 [2024-11-26 18:27:26.381660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.410 qpair failed and we were unable to recover it. 00:31:38.410 [2024-11-26 18:27:26.391534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.410 [2024-11-26 18:27:26.391623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.410 [2024-11-26 18:27:26.391648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.410 [2024-11-26 18:27:26.391662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.410 [2024-11-26 18:27:26.391675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.410 [2024-11-26 18:27:26.391703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.410 qpair failed and we were unable to recover it. 
00:31:38.410 [2024-11-26 18:27:26.401563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.410 [2024-11-26 18:27:26.401651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.410 [2024-11-26 18:27:26.401677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.410 [2024-11-26 18:27:26.401691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.410 [2024-11-26 18:27:26.401704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.410 [2024-11-26 18:27:26.401733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.410 qpair failed and we were unable to recover it. 00:31:38.410 [2024-11-26 18:27:26.411559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.410 [2024-11-26 18:27:26.411641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.410 [2024-11-26 18:27:26.411667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.410 [2024-11-26 18:27:26.411681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.410 [2024-11-26 18:27:26.411694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.410 [2024-11-26 18:27:26.411723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.410 qpair failed and we were unable to recover it. 00:31:38.669 [2024-11-26 18:27:26.421611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.669 [2024-11-26 18:27:26.421705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.669 [2024-11-26 18:27:26.421737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.669 [2024-11-26 18:27:26.421752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.669 [2024-11-26 18:27:26.421765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.669 [2024-11-26 18:27:26.421796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.669 qpair failed and we were unable to recover it. 
00:31:38.669 [2024-11-26 18:27:26.431644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.669 [2024-11-26 18:27:26.431776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.669 [2024-11-26 18:27:26.431801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.669 [2024-11-26 18:27:26.431816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.669 [2024-11-26 18:27:26.431828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.669 [2024-11-26 18:27:26.431857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.669 qpair failed and we were unable to recover it. 00:31:38.669 [2024-11-26 18:27:26.441654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.669 [2024-11-26 18:27:26.441742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.669 [2024-11-26 18:27:26.441767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.669 [2024-11-26 18:27:26.441781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.441794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.441822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 00:31:38.670 [2024-11-26 18:27:26.451682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.670 [2024-11-26 18:27:26.451801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.670 [2024-11-26 18:27:26.451827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.670 [2024-11-26 18:27:26.451842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.451855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.451884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 
00:31:38.670 [2024-11-26 18:27:26.461791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.670 [2024-11-26 18:27:26.461916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.670 [2024-11-26 18:27:26.461942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.670 [2024-11-26 18:27:26.461956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.461974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.462003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 00:31:38.670 [2024-11-26 18:27:26.471752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.670 [2024-11-26 18:27:26.471848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.670 [2024-11-26 18:27:26.471873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.670 [2024-11-26 18:27:26.471888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.471900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.471929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 00:31:38.670 [2024-11-26 18:27:26.481793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.670 [2024-11-26 18:27:26.481881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.670 [2024-11-26 18:27:26.481907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.670 [2024-11-26 18:27:26.481921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.481934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.481963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 
00:31:38.670 [2024-11-26 18:27:26.491818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.670 [2024-11-26 18:27:26.491904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.670 [2024-11-26 18:27:26.491930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.670 [2024-11-26 18:27:26.491944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.491957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.491987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 00:31:38.670 [2024-11-26 18:27:26.501856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.670 [2024-11-26 18:27:26.501947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.670 [2024-11-26 18:27:26.501972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.670 [2024-11-26 18:27:26.501987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.501999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.502028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 00:31:38.670 [2024-11-26 18:27:26.511895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.670 [2024-11-26 18:27:26.511995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.670 [2024-11-26 18:27:26.512020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.670 [2024-11-26 18:27:26.512035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.512048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.512076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 
00:31:38.670 [2024-11-26 18:27:26.521871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.670 [2024-11-26 18:27:26.521996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.670 [2024-11-26 18:27:26.522022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.670 [2024-11-26 18:27:26.522036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.522049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.522077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 00:31:38.670 [2024-11-26 18:27:26.531951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.670 [2024-11-26 18:27:26.532071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.670 [2024-11-26 18:27:26.532097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.670 [2024-11-26 18:27:26.532112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.670 [2024-11-26 18:27:26.532124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.670 [2024-11-26 18:27:26.532153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.670 qpair failed and we were unable to recover it. 00:31:38.670 [2024-11-26 18:27:26.541935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.542048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.542073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.542087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.542100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.542128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 
00:31:38.671 [2024-11-26 18:27:26.551949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.552044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.552077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.552092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.552105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.552134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 00:31:38.671 [2024-11-26 18:27:26.561985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.562071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.562096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.562110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.562123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.562152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 00:31:38.671 [2024-11-26 18:27:26.572006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.572093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.572118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.572132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.572145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.572174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 
00:31:38.671 [2024-11-26 18:27:26.582034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.582126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.582151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.582165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.582178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.582206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 00:31:38.671 [2024-11-26 18:27:26.592089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.592214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.592240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.592254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.592273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.592309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 00:31:38.671 [2024-11-26 18:27:26.602081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.602172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.602199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.602213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.602226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.602257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 
00:31:38.671 [2024-11-26 18:27:26.612138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.612256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.612281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.612296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.612317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.612353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 00:31:38.671 [2024-11-26 18:27:26.622156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.622247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.622273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.622287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.622300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.622342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 00:31:38.671 [2024-11-26 18:27:26.632169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.632308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.632334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.632348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.632360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.632390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 
00:31:38.671 [2024-11-26 18:27:26.642212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.642341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.642368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.642382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.642394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.642424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 00:31:38.671 [2024-11-26 18:27:26.652308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.652386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.671 [2024-11-26 18:27:26.652412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.671 [2024-11-26 18:27:26.652427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.671 [2024-11-26 18:27:26.652440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.671 [2024-11-26 18:27:26.652468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.671 qpair failed and we were unable to recover it. 00:31:38.671 [2024-11-26 18:27:26.662273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.671 [2024-11-26 18:27:26.662388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.672 [2024-11-26 18:27:26.662414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.672 [2024-11-26 18:27:26.662428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.672 [2024-11-26 18:27:26.662441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.672 [2024-11-26 18:27:26.662469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.672 qpair failed and we were unable to recover it. 
00:31:38.672 [2024-11-26 18:27:26.672275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.672 [2024-11-26 18:27:26.672367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.672 [2024-11-26 18:27:26.672393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.672 [2024-11-26 18:27:26.672407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.672 [2024-11-26 18:27:26.672420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.672 [2024-11-26 18:27:26.672448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.672 qpair failed and we were unable to recover it. 00:31:38.931 [2024-11-26 18:27:26.682298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.682395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.682425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.682440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.682453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.682482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 00:31:38.931 [2024-11-26 18:27:26.692344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.692437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.692464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.692478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.692491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.692522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 
00:31:38.931 [2024-11-26 18:27:26.702424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.702547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.702573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.702587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.702600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.702629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 00:31:38.931 [2024-11-26 18:27:26.712402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.712485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.712511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.712525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.712538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.712566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 00:31:38.931 [2024-11-26 18:27:26.722450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.722534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.722559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.722573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.722591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.722621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 
00:31:38.931 [2024-11-26 18:27:26.732475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.732560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.732586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.732601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.732613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.732642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 00:31:38.931 [2024-11-26 18:27:26.742538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.742650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.742675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.742690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.742703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.742732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 00:31:38.931 [2024-11-26 18:27:26.752515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.752639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.752667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.752684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.752698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.752728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 
00:31:38.931 [2024-11-26 18:27:26.762556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.762646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.762672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.762687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.762699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.762728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 00:31:38.931 [2024-11-26 18:27:26.772595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.931 [2024-11-26 18:27:26.772675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.931 [2024-11-26 18:27:26.772701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.931 [2024-11-26 18:27:26.772715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.931 [2024-11-26 18:27:26.772728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.931 [2024-11-26 18:27:26.772757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.931 qpair failed and we were unable to recover it. 00:31:38.932 [2024-11-26 18:27:26.782606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.782720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.782745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.782759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.782772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.782800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 
00:31:38.932 [2024-11-26 18:27:26.792654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.792741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.792767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.792781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.792794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.792822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 00:31:38.932 [2024-11-26 18:27:26.802646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.802780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.802805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.802819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.802832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.802860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 00:31:38.932 [2024-11-26 18:27:26.812682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.812762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.812793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.812809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.812822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.812851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 
00:31:38.932 [2024-11-26 18:27:26.822725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.822815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.822841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.822856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.822869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.822898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 00:31:38.932 [2024-11-26 18:27:26.832821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.832909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.832934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.832949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.832962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.832990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 00:31:38.932 [2024-11-26 18:27:26.842780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.842861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.842886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.842901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.842914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.842943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 
00:31:38.932 [2024-11-26 18:27:26.852786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.852870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.852896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.852910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.852929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.852958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 00:31:38.932 [2024-11-26 18:27:26.862926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.863036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.863061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.863075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.863088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.863117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 00:31:38.932 [2024-11-26 18:27:26.872848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.872927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.872953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.872968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.872981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.873009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 
00:31:38.932 [2024-11-26 18:27:26.882905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.883027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.883053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.883068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.883081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.883109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 00:31:38.932 [2024-11-26 18:27:26.892929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.893050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.893076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.893090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.893104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.893132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 00:31:38.932 [2024-11-26 18:27:26.902962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.932 [2024-11-26 18:27:26.903050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.932 [2024-11-26 18:27:26.903076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.932 [2024-11-26 18:27:26.903090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.932 [2024-11-26 18:27:26.903103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.932 [2024-11-26 18:27:26.903131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.932 qpair failed and we were unable to recover it. 
00:31:38.933 [2024-11-26 18:27:26.913008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.933 [2024-11-26 18:27:26.913103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.933 [2024-11-26 18:27:26.913129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.933 [2024-11-26 18:27:26.913143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.933 [2024-11-26 18:27:26.913156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.933 [2024-11-26 18:27:26.913185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.933 qpair failed and we were unable to recover it. 00:31:38.933 [2024-11-26 18:27:26.922985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.933 [2024-11-26 18:27:26.923069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.933 [2024-11-26 18:27:26.923094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.933 [2024-11-26 18:27:26.923108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.933 [2024-11-26 18:27:26.923121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.933 [2024-11-26 18:27:26.923149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.933 qpair failed and we were unable to recover it. 00:31:38.933 [2024-11-26 18:27:26.933022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:38.933 [2024-11-26 18:27:26.933106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:38.933 [2024-11-26 18:27:26.933131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:38.933 [2024-11-26 18:27:26.933145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:38.933 [2024-11-26 18:27:26.933158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:38.933 [2024-11-26 18:27:26.933186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:38.933 qpair failed and we were unable to recover it. 
00:31:39.192 [2024-11-26 18:27:26.943073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.192 [2024-11-26 18:27:26.943164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.192 [2024-11-26 18:27:26.943194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.192 [2024-11-26 18:27:26.943210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.192 [2024-11-26 18:27:26.943223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.192 [2024-11-26 18:27:26.943251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.192 qpair failed and we were unable to recover it. 00:31:39.192 [2024-11-26 18:27:26.953092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.192 [2024-11-26 18:27:26.953176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.192 [2024-11-26 18:27:26.953202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.192 [2024-11-26 18:27:26.953216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.192 [2024-11-26 18:27:26.953230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.192 [2024-11-26 18:27:26.953258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.192 qpair failed and we were unable to recover it. 00:31:39.192 [2024-11-26 18:27:26.963148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.192 [2024-11-26 18:27:26.963233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.192 [2024-11-26 18:27:26.963258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.192 [2024-11-26 18:27:26.963273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.192 [2024-11-26 18:27:26.963286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.192 [2024-11-26 18:27:26.963323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.192 qpair failed and we were unable to recover it. 
00:31:39.192 [2024-11-26 18:27:26.973142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.192 [2024-11-26 18:27:26.973256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.192 [2024-11-26 18:27:26.973282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.192 [2024-11-26 18:27:26.973296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.192 [2024-11-26 18:27:26.973317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.192 [2024-11-26 18:27:26.973347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.192 qpair failed and we were unable to recover it. 00:31:39.192 [2024-11-26 18:27:26.983176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.192 [2024-11-26 18:27:26.983264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.192 [2024-11-26 18:27:26.983290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.192 [2024-11-26 18:27:26.983311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.192 [2024-11-26 18:27:26.983331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.192 [2024-11-26 18:27:26.983362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.192 qpair failed and we were unable to recover it. 00:31:39.192 [2024-11-26 18:27:26.993231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.192 [2024-11-26 18:27:26.993339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.192 [2024-11-26 18:27:26.993365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.192 [2024-11-26 18:27:26.993379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.192 [2024-11-26 18:27:26.993392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.192 [2024-11-26 18:27:26.993420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.192 qpair failed and we were unable to recover it. 
00:31:39.192 [2024-11-26 18:27:27.003218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.192 [2024-11-26 18:27:27.003311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.192 [2024-11-26 18:27:27.003337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.192 [2024-11-26 18:27:27.003351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.192 [2024-11-26 18:27:27.003364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.192 [2024-11-26 18:27:27.003393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.192 qpair failed and we were unable to recover it. 00:31:39.192 [2024-11-26 18:27:27.013271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.013369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.013396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.013411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.013423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.013452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 00:31:39.193 [2024-11-26 18:27:27.023285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.023386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.023411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.023425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.023436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.023465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 
00:31:39.193 [2024-11-26 18:27:27.033327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.033408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.033433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.033448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.033461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.033489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 00:31:39.193 [2024-11-26 18:27:27.043387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.043503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.043532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.043548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.043561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.043591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 00:31:39.193 [2024-11-26 18:27:27.053367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.053454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.053479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.053493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.053506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.053535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 
00:31:39.193 [2024-11-26 18:27:27.063410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.063544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.063570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.063584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.063597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.063626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 00:31:39.193 [2024-11-26 18:27:27.073442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.073522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.073553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.073568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.073581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.073610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 00:31:39.193 [2024-11-26 18:27:27.083485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.083607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.083632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.083646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.083659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.083687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 
00:31:39.193 [2024-11-26 18:27:27.093481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.093570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.093596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.093611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.093624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.093653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 00:31:39.193 [2024-11-26 18:27:27.103539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.103663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.103688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.103702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.103716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.103745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 00:31:39.193 [2024-11-26 18:27:27.113547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.113627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.113652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.113667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.113685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.113717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 
00:31:39.193 [2024-11-26 18:27:27.123570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.123652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.123677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.123691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.123704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.123733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 00:31:39.193 [2024-11-26 18:27:27.133728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.133816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.193 [2024-11-26 18:27:27.133842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.193 [2024-11-26 18:27:27.133857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.193 [2024-11-26 18:27:27.133869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.193 [2024-11-26 18:27:27.133898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.193 qpair failed and we were unable to recover it. 00:31:39.193 [2024-11-26 18:27:27.143702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.193 [2024-11-26 18:27:27.143814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.194 [2024-11-26 18:27:27.143843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.194 [2024-11-26 18:27:27.143860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.194 [2024-11-26 18:27:27.143873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.194 [2024-11-26 18:27:27.143903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.194 qpair failed and we were unable to recover it. 
00:31:39.194 [2024-11-26 18:27:27.153656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.194 [2024-11-26 18:27:27.153741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.194 [2024-11-26 18:27:27.153767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.194 [2024-11-26 18:27:27.153782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.194 [2024-11-26 18:27:27.153795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.194 [2024-11-26 18:27:27.153825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.194 qpair failed and we were unable to recover it. 00:31:39.194 [2024-11-26 18:27:27.163693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.194 [2024-11-26 18:27:27.163795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.194 [2024-11-26 18:27:27.163821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.194 [2024-11-26 18:27:27.163836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.194 [2024-11-26 18:27:27.163850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.194 [2024-11-26 18:27:27.163879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.194 qpair failed and we were unable to recover it. 00:31:39.194 [2024-11-26 18:27:27.173692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.194 [2024-11-26 18:27:27.173778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.194 [2024-11-26 18:27:27.173804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.194 [2024-11-26 18:27:27.173818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.194 [2024-11-26 18:27:27.173831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.194 [2024-11-26 18:27:27.173860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.194 qpair failed and we were unable to recover it. 
00:31:39.194 [2024-11-26 18:27:27.183765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.194 [2024-11-26 18:27:27.183876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.194 [2024-11-26 18:27:27.183902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.194 [2024-11-26 18:27:27.183916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.194 [2024-11-26 18:27:27.183929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.194 [2024-11-26 18:27:27.183957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.194 qpair failed and we were unable to recover it. 00:31:39.194 [2024-11-26 18:27:27.193820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.194 [2024-11-26 18:27:27.193933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.194 [2024-11-26 18:27:27.193959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.194 [2024-11-26 18:27:27.193974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.194 [2024-11-26 18:27:27.193987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.194 [2024-11-26 18:27:27.194016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.194 qpair failed and we were unable to recover it. 00:31:39.453 [2024-11-26 18:27:27.203887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.453 [2024-11-26 18:27:27.203966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.453 [2024-11-26 18:27:27.204000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.453 [2024-11-26 18:27:27.204015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.453 [2024-11-26 18:27:27.204027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.453 [2024-11-26 18:27:27.204056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.453 qpair failed and we were unable to recover it. 
00:31:39.453 [2024-11-26 18:27:27.213812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.453 [2024-11-26 18:27:27.213899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.453 [2024-11-26 18:27:27.213925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.453 [2024-11-26 18:27:27.213939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.453 [2024-11-26 18:27:27.213952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.453 [2024-11-26 18:27:27.213981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.453 qpair failed and we were unable to recover it. 00:31:39.453 [2024-11-26 18:27:27.223849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.453 [2024-11-26 18:27:27.223944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.453 [2024-11-26 18:27:27.223970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.453 [2024-11-26 18:27:27.223984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.453 [2024-11-26 18:27:27.223997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.453 [2024-11-26 18:27:27.224026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.453 qpair failed and we were unable to recover it. 00:31:39.453 [2024-11-26 18:27:27.233868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.453 [2024-11-26 18:27:27.233961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.453 [2024-11-26 18:27:27.233986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.453 [2024-11-26 18:27:27.234000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.453 [2024-11-26 18:27:27.234012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.453 [2024-11-26 18:27:27.234041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.453 qpair failed and we were unable to recover it. 
00:31:39.453 [2024-11-26 18:27:27.243888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.453 [2024-11-26 18:27:27.243990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.453 [2024-11-26 18:27:27.244016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.453 [2024-11-26 18:27:27.244030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.453 [2024-11-26 18:27:27.244050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.453 [2024-11-26 18:27:27.244079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.453 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.253947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.254033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.254059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.254074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.254087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.254116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.263966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.264065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.264090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.264105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.264118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.264147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 
00:31:39.454 [2024-11-26 18:27:27.274011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.274097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.274124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.274139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.274156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.274186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.284014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.284095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.284121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.284135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.284148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.284177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.294072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.294159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.294185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.294200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.294213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.294242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 
00:31:39.454 [2024-11-26 18:27:27.304090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.304204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.304230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.304244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.304257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.304285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.314131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.314212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.314238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.314253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.314265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.314294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.324189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.324287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.324320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.324335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.324348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.324377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 
00:31:39.454 [2024-11-26 18:27:27.334179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.334258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.334289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.334310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.334325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.334355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.344220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.344315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.344342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.344356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.344369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.344398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.354274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.354406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.354432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.354447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.354459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.354488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 
00:31:39.454 [2024-11-26 18:27:27.364271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.364360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.364386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.364401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.364413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.364444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.374300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.454 [2024-11-26 18:27:27.374445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.454 [2024-11-26 18:27:27.374474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.454 [2024-11-26 18:27:27.374491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.454 [2024-11-26 18:27:27.374510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.454 [2024-11-26 18:27:27.374540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.454 qpair failed and we were unable to recover it. 00:31:39.454 [2024-11-26 18:27:27.384326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.455 [2024-11-26 18:27:27.384448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.455 [2024-11-26 18:27:27.384475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.455 [2024-11-26 18:27:27.384489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.455 [2024-11-26 18:27:27.384502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.455 [2024-11-26 18:27:27.384532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.455 qpair failed and we were unable to recover it. 
00:31:39.455 [2024-11-26 18:27:27.394397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.455 [2024-11-26 18:27:27.394492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.455 [2024-11-26 18:27:27.394517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.455 [2024-11-26 18:27:27.394532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.455 [2024-11-26 18:27:27.394544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.455 [2024-11-26 18:27:27.394574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.455 qpair failed and we were unable to recover it. 00:31:39.455 [2024-11-26 18:27:27.404410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.455 [2024-11-26 18:27:27.404491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.455 [2024-11-26 18:27:27.404516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.455 [2024-11-26 18:27:27.404531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.455 [2024-11-26 18:27:27.404544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.455 [2024-11-26 18:27:27.404573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.455 qpair failed and we were unable to recover it. 00:31:39.455 [2024-11-26 18:27:27.414399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.455 [2024-11-26 18:27:27.414485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.455 [2024-11-26 18:27:27.414510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.455 [2024-11-26 18:27:27.414524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.455 [2024-11-26 18:27:27.414537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.455 [2024-11-26 18:27:27.414566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.455 qpair failed and we were unable to recover it. 
00:31:39.455 [2024-11-26 18:27:27.424455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.455 [2024-11-26 18:27:27.424544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.455 [2024-11-26 18:27:27.424570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.455 [2024-11-26 18:27:27.424584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.455 [2024-11-26 18:27:27.424598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.455 [2024-11-26 18:27:27.424626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.455 qpair failed and we were unable to recover it. 00:31:39.455 [2024-11-26 18:27:27.434442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.455 [2024-11-26 18:27:27.434530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.455 [2024-11-26 18:27:27.434555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.455 [2024-11-26 18:27:27.434570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.455 [2024-11-26 18:27:27.434582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.455 [2024-11-26 18:27:27.434611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.455 qpair failed and we were unable to recover it. 00:31:39.455 [2024-11-26 18:27:27.444493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.455 [2024-11-26 18:27:27.444576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.455 [2024-11-26 18:27:27.444601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.455 [2024-11-26 18:27:27.444615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.455 [2024-11-26 18:27:27.444628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.455 [2024-11-26 18:27:27.444656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.455 qpair failed and we were unable to recover it. 
00:31:39.455 [2024-11-26 18:27:27.454526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.455 [2024-11-26 18:27:27.454617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.455 [2024-11-26 18:27:27.454642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.455 [2024-11-26 18:27:27.454656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.455 [2024-11-26 18:27:27.454669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.455 [2024-11-26 18:27:27.454698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.455 qpair failed and we were unable to recover it. 00:31:39.714 [2024-11-26 18:27:27.464550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.714 [2024-11-26 18:27:27.464639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.714 [2024-11-26 18:27:27.464669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.714 [2024-11-26 18:27:27.464684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.714 [2024-11-26 18:27:27.464697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.714 [2024-11-26 18:27:27.464726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.714 qpair failed and we were unable to recover it. 00:31:39.714 [2024-11-26 18:27:27.474602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.714 [2024-11-26 18:27:27.474689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.714 [2024-11-26 18:27:27.474715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.714 [2024-11-26 18:27:27.474729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.714 [2024-11-26 18:27:27.474742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.714 [2024-11-26 18:27:27.474771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.714 qpair failed and we were unable to recover it. 
00:31:39.714 [2024-11-26 18:27:27.484621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.714 [2024-11-26 18:27:27.484702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.714 [2024-11-26 18:27:27.484727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.714 [2024-11-26 18:27:27.484741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.714 [2024-11-26 18:27:27.484755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.714 [2024-11-26 18:27:27.484783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.714 qpair failed and we were unable to recover it. 00:31:39.714 [2024-11-26 18:27:27.494649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.714 [2024-11-26 18:27:27.494782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.714 [2024-11-26 18:27:27.494808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.714 [2024-11-26 18:27:27.494823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.714 [2024-11-26 18:27:27.494836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.494864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 00:31:39.715 [2024-11-26 18:27:27.504672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.504761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.504787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.504801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.504819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.504849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 
00:31:39.715 [2024-11-26 18:27:27.514672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.514772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.514797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.514812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.514825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.514854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 00:31:39.715 [2024-11-26 18:27:27.524723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.524856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.524881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.524895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.524908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.524937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 00:31:39.715 [2024-11-26 18:27:27.534738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.534865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.534890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.534905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.534918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.534947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 
00:31:39.715 [2024-11-26 18:27:27.544769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.544854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.544881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.544895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.544908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.544936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 00:31:39.715 [2024-11-26 18:27:27.554916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.555002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.555031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.555048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.555061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.555090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 00:31:39.715 [2024-11-26 18:27:27.564825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.564907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.564933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.564948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.564960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.564989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 
00:31:39.715 [2024-11-26 18:27:27.574861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.574950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.574976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.574991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.575003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.575032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 00:31:39.715 [2024-11-26 18:27:27.584891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.584982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.585008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.585022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.585036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.585064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 00:31:39.715 [2024-11-26 18:27:27.594903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.595031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.595061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.595077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.595090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.595118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 
00:31:39.715 [2024-11-26 18:27:27.604952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.605035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.605060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.605074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.605088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.605117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 00:31:39.715 [2024-11-26 18:27:27.614964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.615046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.615072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.615087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.615100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.615128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.715 qpair failed and we were unable to recover it. 00:31:39.715 [2024-11-26 18:27:27.624996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.715 [2024-11-26 18:27:27.625087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.715 [2024-11-26 18:27:27.625113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.715 [2024-11-26 18:27:27.625126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.715 [2024-11-26 18:27:27.625140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.715 [2024-11-26 18:27:27.625168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 
00:31:39.716 [2024-11-26 18:27:27.635016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.716 [2024-11-26 18:27:27.635100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.716 [2024-11-26 18:27:27.635126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.716 [2024-11-26 18:27:27.635140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.716 [2024-11-26 18:27:27.635159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.716 [2024-11-26 18:27:27.635188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 00:31:39.716 [2024-11-26 18:27:27.645058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.716 [2024-11-26 18:27:27.645142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.716 [2024-11-26 18:27:27.645167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.716 [2024-11-26 18:27:27.645182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.716 [2024-11-26 18:27:27.645195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.716 [2024-11-26 18:27:27.645223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 00:31:39.716 [2024-11-26 18:27:27.655099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.716 [2024-11-26 18:27:27.655190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.716 [2024-11-26 18:27:27.655218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.716 [2024-11-26 18:27:27.655235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.716 [2024-11-26 18:27:27.655248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.716 [2024-11-26 18:27:27.655278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 
00:31:39.716 [2024-11-26 18:27:27.665116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.716 [2024-11-26 18:27:27.665207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.716 [2024-11-26 18:27:27.665232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.716 [2024-11-26 18:27:27.665247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.716 [2024-11-26 18:27:27.665260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.716 [2024-11-26 18:27:27.665288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 00:31:39.716 [2024-11-26 18:27:27.675150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.716 [2024-11-26 18:27:27.675233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.716 [2024-11-26 18:27:27.675260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.716 [2024-11-26 18:27:27.675279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.716 [2024-11-26 18:27:27.675293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.716 [2024-11-26 18:27:27.675332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 00:31:39.716 [2024-11-26 18:27:27.685271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.716 [2024-11-26 18:27:27.685365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.716 [2024-11-26 18:27:27.685392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.716 [2024-11-26 18:27:27.685407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.716 [2024-11-26 18:27:27.685419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.716 [2024-11-26 18:27:27.685449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 
00:31:39.716 [2024-11-26 18:27:27.695229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.716 [2024-11-26 18:27:27.695358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.716 [2024-11-26 18:27:27.695384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.716 [2024-11-26 18:27:27.695398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.716 [2024-11-26 18:27:27.695410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.716 [2024-11-26 18:27:27.695440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 00:31:39.716 [2024-11-26 18:27:27.705267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.716 [2024-11-26 18:27:27.705367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.716 [2024-11-26 18:27:27.705393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.716 [2024-11-26 18:27:27.705407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.716 [2024-11-26 18:27:27.705420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.716 [2024-11-26 18:27:27.705450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 00:31:39.716 [2024-11-26 18:27:27.715278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.716 [2024-11-26 18:27:27.715369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.716 [2024-11-26 18:27:27.715395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.716 [2024-11-26 18:27:27.715409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.716 [2024-11-26 18:27:27.715422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.716 [2024-11-26 18:27:27.715451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.716 qpair failed and we were unable to recover it. 
00:31:39.975 [2024-11-26 18:27:27.725324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.975 [2024-11-26 18:27:27.725408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.975 [2024-11-26 18:27:27.725439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.975 [2024-11-26 18:27:27.725454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.975 [2024-11-26 18:27:27.725467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.975 [2024-11-26 18:27:27.725496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.975 qpair failed and we were unable to recover it. 00:31:39.975 [2024-11-26 18:27:27.735318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.975 [2024-11-26 18:27:27.735403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.975 [2024-11-26 18:27:27.735429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.975 [2024-11-26 18:27:27.735443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.975 [2024-11-26 18:27:27.735456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.975 [2024-11-26 18:27:27.735487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.975 qpair failed and we were unable to recover it. 00:31:39.975 [2024-11-26 18:27:27.745343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.975 [2024-11-26 18:27:27.745476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.975 [2024-11-26 18:27:27.745501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.975 [2024-11-26 18:27:27.745515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.975 [2024-11-26 18:27:27.745528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.975 [2024-11-26 18:27:27.745557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.975 qpair failed and we were unable to recover it. 
00:31:39.975 [2024-11-26 18:27:27.755365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.975 [2024-11-26 18:27:27.755454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.975 [2024-11-26 18:27:27.755479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.975 [2024-11-26 18:27:27.755494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.975 [2024-11-26 18:27:27.755507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.755536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.765393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.765484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.765509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.765524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.765542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.765571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.775529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.775615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.775642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.775656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.775669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.775699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 
00:31:39.976 [2024-11-26 18:27:27.785453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.785547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.785572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.785587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.785600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.785629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.795506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.795592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.795617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.795632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.795645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.795673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.805528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.805615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.805641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.805655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.805667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.805696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 
00:31:39.976 [2024-11-26 18:27:27.815526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.815617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.815643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.815658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.815671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.815700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.825614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.825750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.825776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.825790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.825803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.825832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.835590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.835673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.835699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.835713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.835726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.835754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 
00:31:39.976 [2024-11-26 18:27:27.845619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.845712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.845737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.845751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.845764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.845793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.855686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.855770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.855803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.855818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.855831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.855860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.865710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.865800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.865824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.865838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.865851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.865880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 
00:31:39.976 [2024-11-26 18:27:27.875691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.875779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.875805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.875819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.875831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.875860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.885730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.885814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.885839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.885853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.885866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.885895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.895757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.895874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.895900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.895915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.895933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.895962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 
00:31:39.976 [2024-11-26 18:27:27.905811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.905912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.905937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.905953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.905966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.905995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.915802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.915932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.915958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.915973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.915985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.916014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.925877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.925961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.925986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.926000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.926013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.926042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 
00:31:39.976 [2024-11-26 18:27:27.935848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.935930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.935955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.935970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.935982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.936011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.945898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.945987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.946013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.946028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.946043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.946071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.976 [2024-11-26 18:27:27.955927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.956059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.956085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.956100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.956112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.956141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 
00:31:39.976 [2024-11-26 18:27:27.965986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.976 [2024-11-26 18:27:27.966105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.976 [2024-11-26 18:27:27.966130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.976 [2024-11-26 18:27:27.966145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.976 [2024-11-26 18:27:27.966158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.976 [2024-11-26 18:27:27.966187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.976 qpair failed and we were unable to recover it. 00:31:39.977 [2024-11-26 18:27:27.975990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:39.977 [2024-11-26 18:27:27.976072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:39.977 [2024-11-26 18:27:27.976098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:39.977 [2024-11-26 18:27:27.976112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:39.977 [2024-11-26 18:27:27.976125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:39.977 [2024-11-26 18:27:27.976153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:39.977 qpair failed and we were unable to recover it. 00:31:40.235 [2024-11-26 18:27:27.986019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.235 [2024-11-26 18:27:27.986107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.235 [2024-11-26 18:27:27.986137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.235 [2024-11-26 18:27:27.986152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.235 [2024-11-26 18:27:27.986166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.235 [2024-11-26 18:27:27.986194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.235 qpair failed and we were unable to recover it. 
00:31:40.235 [2024-11-26 18:27:27.996037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.235 [2024-11-26 18:27:27.996165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.235 [2024-11-26 18:27:27.996191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.235 [2024-11-26 18:27:27.996205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.235 [2024-11-26 18:27:27.996218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.235 [2024-11-26 18:27:27.996248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.235 qpair failed and we were unable to recover it. 00:31:40.235 [2024-11-26 18:27:28.006078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.006190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.006215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.006230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.006243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.006272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 00:31:40.236 [2024-11-26 18:27:28.016093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.016179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.016205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.016219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.016232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.016261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 
00:31:40.236 [2024-11-26 18:27:28.026158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.026271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.026295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.026319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.026338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.026367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 00:31:40.236 [2024-11-26 18:27:28.036134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.036229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.036255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.036270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.036282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.036318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 00:31:40.236 [2024-11-26 18:27:28.046190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.046280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.046317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.046341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.046359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.046394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 
00:31:40.236 [2024-11-26 18:27:28.056279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.056373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.056399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.056414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.056427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.056457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 00:31:40.236 [2024-11-26 18:27:28.066277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.066408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.066434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.066448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.066461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.066490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 00:31:40.236 [2024-11-26 18:27:28.076257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.076371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.076396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.076411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.076424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.076453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 
00:31:40.236 [2024-11-26 18:27:28.086294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.086391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.086416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.086430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.086443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.086471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 00:31:40.236 [2024-11-26 18:27:28.096364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.096483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.096511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.096525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.096539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.096569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 00:31:40.236 [2024-11-26 18:27:28.106416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.106530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.106556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.106570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.106583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.106613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 
00:31:40.236 [2024-11-26 18:27:28.116432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.116529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.116560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.116575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.116588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.116617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 00:31:40.236 [2024-11-26 18:27:28.126414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.236 [2024-11-26 18:27:28.126501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.236 [2024-11-26 18:27:28.126527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.236 [2024-11-26 18:27:28.126542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.236 [2024-11-26 18:27:28.126554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.236 [2024-11-26 18:27:28.126583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.236 qpair failed and we were unable to recover it. 00:31:40.236 [2024-11-26 18:27:28.136443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.136522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.136548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.136563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.136575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.136604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 
00:31:40.237 [2024-11-26 18:27:28.146538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.146663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.146688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.146702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.146715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.146743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 00:31:40.237 [2024-11-26 18:27:28.156512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.156596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.156621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.156636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.156653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.156683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 00:31:40.237 [2024-11-26 18:27:28.166544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.166633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.166659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.166673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.166685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.166714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 
00:31:40.237 [2024-11-26 18:27:28.176549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.176635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.176661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.176675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.176688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.176718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 00:31:40.237 [2024-11-26 18:27:28.186611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.186716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.186742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.186756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.186769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.186797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 00:31:40.237 [2024-11-26 18:27:28.196706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.196791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.196816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.196830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.196843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.196872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 
00:31:40.237 [2024-11-26 18:27:28.206654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.206780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.206806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.206821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.206834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.206862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 00:31:40.237 [2024-11-26 18:27:28.216709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.216813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.216838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.216852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.216865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.216894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 00:31:40.237 [2024-11-26 18:27:28.226758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.226871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.226896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.226911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.226924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.226953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 
00:31:40.237 [2024-11-26 18:27:28.236754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.237 [2024-11-26 18:27:28.236835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.237 [2024-11-26 18:27:28.236860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.237 [2024-11-26 18:27:28.236875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.237 [2024-11-26 18:27:28.236887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.237 [2024-11-26 18:27:28.236915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.237 qpair failed and we were unable to recover it. 00:31:40.497 [2024-11-26 18:27:28.246756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.497 [2024-11-26 18:27:28.246838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.497 [2024-11-26 18:27:28.246868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.497 [2024-11-26 18:27:28.246885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.497 [2024-11-26 18:27:28.246898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.497 [2024-11-26 18:27:28.246927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.497 qpair failed and we were unable to recover it. 00:31:40.497 [2024-11-26 18:27:28.256775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.497 [2024-11-26 18:27:28.256859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.497 [2024-11-26 18:27:28.256885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.497 [2024-11-26 18:27:28.256899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.497 [2024-11-26 18:27:28.256912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.497 [2024-11-26 18:27:28.256941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.497 qpair failed and we were unable to recover it. 
00:31:40.497 [2024-11-26 18:27:28.266834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.497 [2024-11-26 18:27:28.266949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.497 [2024-11-26 18:27:28.266974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.497 [2024-11-26 18:27:28.266988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.497 [2024-11-26 18:27:28.267001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.497 [2024-11-26 18:27:28.267030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.497 qpair failed and we were unable to recover it. 00:31:40.497 [2024-11-26 18:27:28.276896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.497 [2024-11-26 18:27:28.277015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.497 [2024-11-26 18:27:28.277040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.497 [2024-11-26 18:27:28.277054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.497 [2024-11-26 18:27:28.277067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.497 [2024-11-26 18:27:28.277096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.497 qpair failed and we were unable to recover it. 00:31:40.497 [2024-11-26 18:27:28.286912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.497 [2024-11-26 18:27:28.286998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.497 [2024-11-26 18:27:28.287024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.497 [2024-11-26 18:27:28.287039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.497 [2024-11-26 18:27:28.287052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.497 [2024-11-26 18:27:28.287088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.497 qpair failed and we were unable to recover it. 
00:31:40.497 [2024-11-26 18:27:28.296949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.497 [2024-11-26 18:27:28.297033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.497 [2024-11-26 18:27:28.297059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.497 [2024-11-26 18:27:28.297073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.497 [2024-11-26 18:27:28.297086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.497 [2024-11-26 18:27:28.297115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.497 qpair failed and we were unable to recover it. 00:31:40.498 [2024-11-26 18:27:28.306991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.307087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.307113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.307127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.307140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.307168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 00:31:40.498 [2024-11-26 18:27:28.316997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.317082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.317109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.317126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.317139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.317170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 
00:31:40.498 [2024-11-26 18:27:28.327014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.327146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.327172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.327187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.327200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.327229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 00:31:40.498 [2024-11-26 18:27:28.337005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.337088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.337114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.337129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.337142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.337171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 00:31:40.498 [2024-11-26 18:27:28.347054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.347144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.347169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.347184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.347197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.347225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 
00:31:40.498 [2024-11-26 18:27:28.357078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.357164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.357190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.357205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.357218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.357247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 00:31:40.498 [2024-11-26 18:27:28.367097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.367178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.367204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.367219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.367232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.367261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 00:31:40.498 [2024-11-26 18:27:28.377141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.377225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.377256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.377271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.377284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.377320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 
00:31:40.498 [2024-11-26 18:27:28.387210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.387332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.387358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.387372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.387385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.387414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 00:31:40.498 [2024-11-26 18:27:28.397230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.397335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.397361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.397376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.397389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.397417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 00:31:40.498 [2024-11-26 18:27:28.407231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.407326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.407352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.407366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.407379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.407408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 
00:31:40.498 [2024-11-26 18:27:28.417227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.417328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.417354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.417369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.417381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.417415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 00:31:40.498 [2024-11-26 18:27:28.427367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.498 [2024-11-26 18:27:28.427458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.498 [2024-11-26 18:27:28.427484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.498 [2024-11-26 18:27:28.427498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.498 [2024-11-26 18:27:28.427511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.498 [2024-11-26 18:27:28.427540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.498 qpair failed and we were unable to recover it. 00:31:40.499 [2024-11-26 18:27:28.437299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.499 [2024-11-26 18:27:28.437399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.499 [2024-11-26 18:27:28.437425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.499 [2024-11-26 18:27:28.437439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.499 [2024-11-26 18:27:28.437453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.499 [2024-11-26 18:27:28.437486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.499 qpair failed and we were unable to recover it. 
00:31:40.499 [2024-11-26 18:27:28.447345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.499 [2024-11-26 18:27:28.447431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.499 [2024-11-26 18:27:28.447458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.499 [2024-11-26 18:27:28.447472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.499 [2024-11-26 18:27:28.447485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.499 [2024-11-26 18:27:28.447514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.499 qpair failed and we were unable to recover it. 00:31:40.499 [2024-11-26 18:27:28.457341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.499 [2024-11-26 18:27:28.457445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.499 [2024-11-26 18:27:28.457470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.499 [2024-11-26 18:27:28.457485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.499 [2024-11-26 18:27:28.457498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.499 [2024-11-26 18:27:28.457526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.499 qpair failed and we were unable to recover it. 00:31:40.499 [2024-11-26 18:27:28.467420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.499 [2024-11-26 18:27:28.467507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.499 [2024-11-26 18:27:28.467532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.499 [2024-11-26 18:27:28.467546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.499 [2024-11-26 18:27:28.467560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.499 [2024-11-26 18:27:28.467589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.499 qpair failed and we were unable to recover it. 
00:31:40.499 [2024-11-26 18:27:28.477398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.499 [2024-11-26 18:27:28.477489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.499 [2024-11-26 18:27:28.477514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.499 [2024-11-26 18:27:28.477528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.499 [2024-11-26 18:27:28.477541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.499 [2024-11-26 18:27:28.477569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.499 qpair failed and we were unable to recover it. 00:31:40.499 [2024-11-26 18:27:28.487435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.499 [2024-11-26 18:27:28.487521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.499 [2024-11-26 18:27:28.487546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.499 [2024-11-26 18:27:28.487560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.499 [2024-11-26 18:27:28.487573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.499 [2024-11-26 18:27:28.487601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.499 qpair failed and we were unable to recover it. 00:31:40.499 [2024-11-26 18:27:28.497475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.499 [2024-11-26 18:27:28.497568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.499 [2024-11-26 18:27:28.497594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.499 [2024-11-26 18:27:28.497608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.499 [2024-11-26 18:27:28.497621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.499 [2024-11-26 18:27:28.497649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.499 qpair failed and we were unable to recover it. 
00:31:40.758 [2024-11-26 18:27:28.507495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.758 [2024-11-26 18:27:28.507591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.758 [2024-11-26 18:27:28.507624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.758 [2024-11-26 18:27:28.507639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.758 [2024-11-26 18:27:28.507652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.758 [2024-11-26 18:27:28.507681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.758 qpair failed and we were unable to recover it. 00:31:40.758 [2024-11-26 18:27:28.517509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.758 [2024-11-26 18:27:28.517590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.758 [2024-11-26 18:27:28.517615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.758 [2024-11-26 18:27:28.517630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.758 [2024-11-26 18:27:28.517642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.758 [2024-11-26 18:27:28.517670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.758 qpair failed and we were unable to recover it. 00:31:40.758 [2024-11-26 18:27:28.527540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.758 [2024-11-26 18:27:28.527638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.758 [2024-11-26 18:27:28.527663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.758 [2024-11-26 18:27:28.527677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.758 [2024-11-26 18:27:28.527690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.758 [2024-11-26 18:27:28.527719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.758 qpair failed and we were unable to recover it. 
00:31:40.758 [2024-11-26 18:27:28.537603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.758 [2024-11-26 18:27:28.537688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.758 [2024-11-26 18:27:28.537713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.758 [2024-11-26 18:27:28.537728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.758 [2024-11-26 18:27:28.537741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.758 [2024-11-26 18:27:28.537770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.758 qpair failed and we were unable to recover it. 00:31:40.758 [2024-11-26 18:27:28.547651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.758 [2024-11-26 18:27:28.547739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.758 [2024-11-26 18:27:28.547764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.758 [2024-11-26 18:27:28.547779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.758 [2024-11-26 18:27:28.547792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.758 [2024-11-26 18:27:28.547827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.758 qpair failed and we were unable to recover it. 00:31:40.758 [2024-11-26 18:27:28.557656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.758 [2024-11-26 18:27:28.557745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.758 [2024-11-26 18:27:28.557770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.758 [2024-11-26 18:27:28.557784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.758 [2024-11-26 18:27:28.557797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.758 [2024-11-26 18:27:28.557826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.758 qpair failed and we were unable to recover it. 
00:31:40.758 [2024-11-26 18:27:28.567699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.758 [2024-11-26 18:27:28.567780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.758 [2024-11-26 18:27:28.567805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.758 [2024-11-26 18:27:28.567819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.758 [2024-11-26 18:27:28.567832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.567861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 00:31:40.759 [2024-11-26 18:27:28.577691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.577813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.577839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.577854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.577866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.577895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 00:31:40.759 [2024-11-26 18:27:28.587827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.587918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.587943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.587957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.587970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.587999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 
00:31:40.759 [2024-11-26 18:27:28.597768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.597852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.597878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.597892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.597905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.597933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 00:31:40.759 [2024-11-26 18:27:28.607760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.607862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.607887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.607902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.607914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.607942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 00:31:40.759 [2024-11-26 18:27:28.617872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.617953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.617978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.617992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.618005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.618033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 
00:31:40.759 [2024-11-26 18:27:28.627906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.628000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.628025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.628039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.628051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.628081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 00:31:40.759 [2024-11-26 18:27:28.637851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.637936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.637966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.637982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.637994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.638022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 00:31:40.759 [2024-11-26 18:27:28.647927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.648042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.648068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.648083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.648096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.648125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 
00:31:40.759 [2024-11-26 18:27:28.658016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.658102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.658128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.658142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.658155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.658184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 00:31:40.759 [2024-11-26 18:27:28.667945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.668073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.668099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.668114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.668127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.668157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 00:31:40.759 [2024-11-26 18:27:28.677974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.678066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.678092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.678106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.678119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.678153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 
00:31:40.759 [2024-11-26 18:27:28.687985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.688077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.688102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.688116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.688129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.759 [2024-11-26 18:27:28.688159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.759 qpair failed and we were unable to recover it. 00:31:40.759 [2024-11-26 18:27:28.698034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.759 [2024-11-26 18:27:28.698117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.759 [2024-11-26 18:27:28.698143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.759 [2024-11-26 18:27:28.698157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.759 [2024-11-26 18:27:28.698170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.760 [2024-11-26 18:27:28.698199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.760 qpair failed and we were unable to recover it. 00:31:40.760 [2024-11-26 18:27:28.708058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.760 [2024-11-26 18:27:28.708165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.760 [2024-11-26 18:27:28.708190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.760 [2024-11-26 18:27:28.708204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.760 [2024-11-26 18:27:28.708217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.760 [2024-11-26 18:27:28.708246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.760 qpair failed and we were unable to recover it. 
00:31:40.760 [2024-11-26 18:27:28.718174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.760 [2024-11-26 18:27:28.718257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.760 [2024-11-26 18:27:28.718283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.760 [2024-11-26 18:27:28.718298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.760 [2024-11-26 18:27:28.718317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.760 [2024-11-26 18:27:28.718346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.760 qpair failed and we were unable to recover it. 00:31:40.760 [2024-11-26 18:27:28.728123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.760 [2024-11-26 18:27:28.728209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.760 [2024-11-26 18:27:28.728237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.760 [2024-11-26 18:27:28.728254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.760 [2024-11-26 18:27:28.728267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.760 [2024-11-26 18:27:28.728296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.760 qpair failed and we were unable to recover it. 00:31:40.760 [2024-11-26 18:27:28.738145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.760 [2024-11-26 18:27:28.738235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.760 [2024-11-26 18:27:28.738261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.760 [2024-11-26 18:27:28.738275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.760 [2024-11-26 18:27:28.738288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.760 [2024-11-26 18:27:28.738323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.760 qpair failed and we were unable to recover it. 
00:31:40.760 [2024-11-26 18:27:28.748182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.760 [2024-11-26 18:27:28.748268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.760 [2024-11-26 18:27:28.748294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.760 [2024-11-26 18:27:28.748317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.760 [2024-11-26 18:27:28.748331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.760 [2024-11-26 18:27:28.748363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.760 qpair failed and we were unable to recover it. 00:31:40.760 [2024-11-26 18:27:28.758214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:40.760 [2024-11-26 18:27:28.758300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:40.760 [2024-11-26 18:27:28.758332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:40.760 [2024-11-26 18:27:28.758346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:40.760 [2024-11-26 18:27:28.758359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:40.760 [2024-11-26 18:27:28.758388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:40.760 qpair failed and we were unable to recover it. 00:31:41.019 [2024-11-26 18:27:28.768222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.019 [2024-11-26 18:27:28.768313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.019 [2024-11-26 18:27:28.768344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.019 [2024-11-26 18:27:28.768359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.019 [2024-11-26 18:27:28.768372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.019 [2024-11-26 18:27:28.768401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.019 qpair failed and we were unable to recover it. 
00:31:41.019 [2024-11-26 18:27:28.778245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.019 [2024-11-26 18:27:28.778337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.019 [2024-11-26 18:27:28.778363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.019 [2024-11-26 18:27:28.778377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.019 [2024-11-26 18:27:28.778390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.019 [2024-11-26 18:27:28.778419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.019 qpair failed and we were unable to recover it. 00:31:41.019 [2024-11-26 18:27:28.788395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.019 [2024-11-26 18:27:28.788499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.019 [2024-11-26 18:27:28.788525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.019 [2024-11-26 18:27:28.788540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.019 [2024-11-26 18:27:28.788552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.019 [2024-11-26 18:27:28.788581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.019 qpair failed and we were unable to recover it. 00:31:41.019 [2024-11-26 18:27:28.798310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.019 [2024-11-26 18:27:28.798400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.019 [2024-11-26 18:27:28.798425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.019 [2024-11-26 18:27:28.798440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.019 [2024-11-26 18:27:28.798453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.019 [2024-11-26 18:27:28.798482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.019 qpair failed and we were unable to recover it. 
00:31:41.019 [2024-11-26 18:27:28.808350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.019 [2024-11-26 18:27:28.808430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.019 [2024-11-26 18:27:28.808455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.019 [2024-11-26 18:27:28.808470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.019 [2024-11-26 18:27:28.808483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.019 [2024-11-26 18:27:28.808517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.019 qpair failed and we were unable to recover it. 00:31:41.019 [2024-11-26 18:27:28.818369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.019 [2024-11-26 18:27:28.818461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.019 [2024-11-26 18:27:28.818486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.019 [2024-11-26 18:27:28.818501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.019 [2024-11-26 18:27:28.818514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.019 [2024-11-26 18:27:28.818542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.019 qpair failed and we were unable to recover it. 00:31:41.019 [2024-11-26 18:27:28.828444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.019 [2024-11-26 18:27:28.828545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.019 [2024-11-26 18:27:28.828570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.019 [2024-11-26 18:27:28.828584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.019 [2024-11-26 18:27:28.828597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.019 [2024-11-26 18:27:28.828627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.019 qpair failed and we were unable to recover it. 
00:31:41.019 [2024-11-26 18:27:28.838426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.019 [2024-11-26 18:27:28.838516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.019 [2024-11-26 18:27:28.838542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.838557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.838570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.838598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 00:31:41.020 [2024-11-26 18:27:28.848494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.848579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.848604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.848619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.848632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.848660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 00:31:41.020 [2024-11-26 18:27:28.858519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.858647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.858672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.858687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.858699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.858728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 
00:31:41.020 [2024-11-26 18:27:28.868547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.868635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.868661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.868675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.868689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.868717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 00:31:41.020 [2024-11-26 18:27:28.878533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.878620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.878646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.878660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.878673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.878702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 00:31:41.020 [2024-11-26 18:27:28.888545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.888631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.888656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.888670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.888683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.888712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 
00:31:41.020 [2024-11-26 18:27:28.898581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.898664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.898695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.898709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.898722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.898751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 00:31:41.020 [2024-11-26 18:27:28.908609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.908700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.908726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.908740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.908753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.908782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 00:31:41.020 [2024-11-26 18:27:28.918634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.918730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.918755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.918769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.918782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.918810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 
00:31:41.020 [2024-11-26 18:27:28.928659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.928742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.928768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.928782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.928795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.928823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 00:31:41.020 [2024-11-26 18:27:28.938706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.938794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.938820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.938835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.938847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.938882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 00:31:41.020 [2024-11-26 18:27:28.948778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.948897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.948922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.948936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.948949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.948977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 
00:31:41.020 [2024-11-26 18:27:28.958767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.958853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.958878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.958892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.020 [2024-11-26 18:27:28.958904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.020 [2024-11-26 18:27:28.958933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.020 qpair failed and we were unable to recover it. 00:31:41.020 [2024-11-26 18:27:28.968836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.020 [2024-11-26 18:27:28.968916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.020 [2024-11-26 18:27:28.968942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.020 [2024-11-26 18:27:28.968956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.021 [2024-11-26 18:27:28.968969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.021 [2024-11-26 18:27:28.968997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.021 qpair failed and we were unable to recover it. 00:31:41.021 [2024-11-26 18:27:28.978837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.021 [2024-11-26 18:27:28.978928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.021 [2024-11-26 18:27:28.978956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.021 [2024-11-26 18:27:28.978972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.021 [2024-11-26 18:27:28.978985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.021 [2024-11-26 18:27:28.979015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.021 qpair failed and we were unable to recover it. 
00:31:41.021 [2024-11-26 18:27:28.988885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.021 [2024-11-26 18:27:28.988984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.021 [2024-11-26 18:27:28.989010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.021 [2024-11-26 18:27:28.989024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.021 [2024-11-26 18:27:28.989037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.021 [2024-11-26 18:27:28.989066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.021 qpair failed and we were unable to recover it. 00:31:41.021 [2024-11-26 18:27:28.998869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.021 [2024-11-26 18:27:28.998965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.021 [2024-11-26 18:27:28.998990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.021 [2024-11-26 18:27:28.999005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.021 [2024-11-26 18:27:28.999018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.021 [2024-11-26 18:27:28.999047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.021 qpair failed and we were unable to recover it. 00:31:41.021 [2024-11-26 18:27:29.008904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.021 [2024-11-26 18:27:29.008988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.021 [2024-11-26 18:27:29.009014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.021 [2024-11-26 18:27:29.009029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.021 [2024-11-26 18:27:29.009041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.021 [2024-11-26 18:27:29.009070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.021 qpair failed and we were unable to recover it. 
00:31:41.021 [2024-11-26 18:27:29.018928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.021 [2024-11-26 18:27:29.019011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.021 [2024-11-26 18:27:29.019037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.021 [2024-11-26 18:27:29.019051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.021 [2024-11-26 18:27:29.019064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.021 [2024-11-26 18:27:29.019093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.021 qpair failed and we were unable to recover it. 00:31:41.283 [2024-11-26 18:27:29.029002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.283 [2024-11-26 18:27:29.029092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.283 [2024-11-26 18:27:29.029121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.283 [2024-11-26 18:27:29.029136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.283 [2024-11-26 18:27:29.029148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.283 [2024-11-26 18:27:29.029176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-11-26 18:27:29.038991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.283 [2024-11-26 18:27:29.039082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.283 [2024-11-26 18:27:29.039109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.283 [2024-11-26 18:27:29.039130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.283 [2024-11-26 18:27:29.039143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.283 [2024-11-26 18:27:29.039174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.283 qpair failed and we were unable to recover it. 
00:31:41.283 [2024-11-26 18:27:29.049117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.283 [2024-11-26 18:27:29.049208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.283 [2024-11-26 18:27:29.049234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.283 [2024-11-26 18:27:29.049249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.283 [2024-11-26 18:27:29.049263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.283 [2024-11-26 18:27:29.049292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-11-26 18:27:29.059047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.283 [2024-11-26 18:27:29.059158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.283 [2024-11-26 18:27:29.059184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.283 [2024-11-26 18:27:29.059198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.283 [2024-11-26 18:27:29.059212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.283 [2024-11-26 18:27:29.059241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-11-26 18:27:29.069081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.283 [2024-11-26 18:27:29.069169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.283 [2024-11-26 18:27:29.069194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.283 [2024-11-26 18:27:29.069208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.283 [2024-11-26 18:27:29.069222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.283 [2024-11-26 18:27:29.069256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.283 qpair failed and we were unable to recover it. 
00:31:41.283 [2024-11-26 18:27:29.079192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.283 [2024-11-26 18:27:29.079322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.283 [2024-11-26 18:27:29.079348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.283 [2024-11-26 18:27:29.079363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.283 [2024-11-26 18:27:29.079376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.283 [2024-11-26 18:27:29.079404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-11-26 18:27:29.089215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.283 [2024-11-26 18:27:29.089308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.283 [2024-11-26 18:27:29.089341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.283 [2024-11-26 18:27:29.089360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.283 [2024-11-26 18:27:29.089372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.283 [2024-11-26 18:27:29.089402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.283 qpair failed and we were unable to recover it. 00:31:41.283 [2024-11-26 18:27:29.099154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.283 [2024-11-26 18:27:29.099243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.283 [2024-11-26 18:27:29.099269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.283 [2024-11-26 18:27:29.099284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.283 [2024-11-26 18:27:29.099297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.283 [2024-11-26 18:27:29.099336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.283 qpair failed and we were unable to recover it. 
00:31:41.283 [2024-11-26 18:27:29.109188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.283 [2024-11-26 18:27:29.109280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.283 [2024-11-26 18:27:29.109312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.283 [2024-11-26 18:27:29.109329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.109342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.109371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-11-26 18:27:29.119360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.119479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.119505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.284 [2024-11-26 18:27:29.119520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.119532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.119561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-11-26 18:27:29.129241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.129331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.129361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.284 [2024-11-26 18:27:29.129377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.129391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.129421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 
00:31:41.284 [2024-11-26 18:27:29.139283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.139385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.139412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.284 [2024-11-26 18:27:29.139427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.139441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.139471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-11-26 18:27:29.149319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.149409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.149434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.284 [2024-11-26 18:27:29.149450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.149463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.149493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-11-26 18:27:29.159345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.159431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.159464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.284 [2024-11-26 18:27:29.159479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.159492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.159521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 
00:31:41.284 [2024-11-26 18:27:29.169383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.169494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.169523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.284 [2024-11-26 18:27:29.169539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.169552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.169582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-11-26 18:27:29.179402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.179489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.179515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.284 [2024-11-26 18:27:29.179529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.179542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.179571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-11-26 18:27:29.189461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.189601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.189627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.284 [2024-11-26 18:27:29.189641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.189653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.189682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 
00:31:41.284 [2024-11-26 18:27:29.199465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.199583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.199610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.284 [2024-11-26 18:27:29.199624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.284 [2024-11-26 18:27:29.199637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.284 [2024-11-26 18:27:29.199671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.284 qpair failed and we were unable to recover it. 00:31:41.284 [2024-11-26 18:27:29.209502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.284 [2024-11-26 18:27:29.209595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.284 [2024-11-26 18:27:29.209621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.285 [2024-11-26 18:27:29.209635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.285 [2024-11-26 18:27:29.209648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.285 [2024-11-26 18:27:29.209676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.285 qpair failed and we were unable to recover it. 00:31:41.285 [2024-11-26 18:27:29.219510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.285 [2024-11-26 18:27:29.219599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.285 [2024-11-26 18:27:29.219624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.285 [2024-11-26 18:27:29.219639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.285 [2024-11-26 18:27:29.219651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.285 [2024-11-26 18:27:29.219680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.285 qpair failed and we were unable to recover it. 
00:31:41.285 [2024-11-26 18:27:29.229548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.285 [2024-11-26 18:27:29.229634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.285 [2024-11-26 18:27:29.229660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.285 [2024-11-26 18:27:29.229675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.285 [2024-11-26 18:27:29.229688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.285 [2024-11-26 18:27:29.229716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.285 qpair failed and we were unable to recover it. 00:31:41.285 [2024-11-26 18:27:29.239601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.285 [2024-11-26 18:27:29.239725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.285 [2024-11-26 18:27:29.239751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.285 [2024-11-26 18:27:29.239765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.285 [2024-11-26 18:27:29.239778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.285 [2024-11-26 18:27:29.239806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.285 qpair failed and we were unable to recover it. 00:31:41.285 [2024-11-26 18:27:29.249603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.285 [2024-11-26 18:27:29.249721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.285 [2024-11-26 18:27:29.249747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.285 [2024-11-26 18:27:29.249762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.285 [2024-11-26 18:27:29.249775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.285 [2024-11-26 18:27:29.249806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.285 qpair failed and we were unable to recover it. 
00:31:41.285 [2024-11-26 18:27:29.259615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.285 [2024-11-26 18:27:29.259700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.285 [2024-11-26 18:27:29.259726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.285 [2024-11-26 18:27:29.259740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.285 [2024-11-26 18:27:29.259753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.285 [2024-11-26 18:27:29.259782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.285 qpair failed and we were unable to recover it. 00:31:41.285 [2024-11-26 18:27:29.269671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.285 [2024-11-26 18:27:29.269765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.285 [2024-11-26 18:27:29.269790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.285 [2024-11-26 18:27:29.269805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.285 [2024-11-26 18:27:29.269817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.285 [2024-11-26 18:27:29.269846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.285 qpair failed and we were unable to recover it. 00:31:41.285 [2024-11-26 18:27:29.279695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.285 [2024-11-26 18:27:29.279822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.285 [2024-11-26 18:27:29.279848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.285 [2024-11-26 18:27:29.279862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.285 [2024-11-26 18:27:29.279875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.285 [2024-11-26 18:27:29.279904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.285 qpair failed and we were unable to recover it. 
00:31:41.285 [2024-11-26 18:27:29.289745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.285 [2024-11-26 18:27:29.289833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.285 [2024-11-26 18:27:29.289867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.285 [2024-11-26 18:27:29.289884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.285 [2024-11-26 18:27:29.289897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.285 [2024-11-26 18:27:29.289926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.285 qpair failed and we were unable to recover it. 00:31:41.546 [2024-11-26 18:27:29.299751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.546 [2024-11-26 18:27:29.299842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.546 [2024-11-26 18:27:29.299869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.546 [2024-11-26 18:27:29.299883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.546 [2024-11-26 18:27:29.299896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.546 [2024-11-26 18:27:29.299925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.546 qpair failed and we were unable to recover it. 00:31:41.546 [2024-11-26 18:27:29.309780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.546 [2024-11-26 18:27:29.309876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.546 [2024-11-26 18:27:29.309901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.546 [2024-11-26 18:27:29.309915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.546 [2024-11-26 18:27:29.309928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.546 [2024-11-26 18:27:29.309957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.546 qpair failed and we were unable to recover it. 
00:31:41.546 [2024-11-26 18:27:29.319832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.546 [2024-11-26 18:27:29.319921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.546 [2024-11-26 18:27:29.319949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.546 [2024-11-26 18:27:29.319965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.546 [2024-11-26 18:27:29.319977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.546 [2024-11-26 18:27:29.320007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.546 qpair failed and we were unable to recover it. 00:31:41.546 [2024-11-26 18:27:29.329835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.546 [2024-11-26 18:27:29.329919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.546 [2024-11-26 18:27:29.329944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.546 [2024-11-26 18:27:29.329958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.546 [2024-11-26 18:27:29.329971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.546 [2024-11-26 18:27:29.330005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.546 qpair failed and we were unable to recover it. 00:31:41.546 [2024-11-26 18:27:29.339885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.546 [2024-11-26 18:27:29.339997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.546 [2024-11-26 18:27:29.340025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.546 [2024-11-26 18:27:29.340042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.546 [2024-11-26 18:27:29.340055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.546 [2024-11-26 18:27:29.340084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.546 qpair failed and we were unable to recover it. 
00:31:41.546 [2024-11-26 18:27:29.349949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.546 [2024-11-26 18:27:29.350046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.546 [2024-11-26 18:27:29.350071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.546 [2024-11-26 18:27:29.350085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.546 [2024-11-26 18:27:29.350098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.546 [2024-11-26 18:27:29.350127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.546 qpair failed and we were unable to recover it. 00:31:41.546 [2024-11-26 18:27:29.359945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.546 [2024-11-26 18:27:29.360028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.546 [2024-11-26 18:27:29.360054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.546 [2024-11-26 18:27:29.360068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.546 [2024-11-26 18:27:29.360081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.546 [2024-11-26 18:27:29.360110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.546 qpair failed and we were unable to recover it. 00:31:41.546 [2024-11-26 18:27:29.370026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.546 [2024-11-26 18:27:29.370122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.546 [2024-11-26 18:27:29.370148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.546 [2024-11-26 18:27:29.370162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.546 [2024-11-26 18:27:29.370175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.546 [2024-11-26 18:27:29.370204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.546 qpair failed and we were unable to recover it. 
00:31:41.546 [2024-11-26 18:27:29.379959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.546 [2024-11-26 18:27:29.380071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.380097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.380111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.380124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.380154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 00:31:41.547 [2024-11-26 18:27:29.390038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.390129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.390155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.390169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.390183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.390211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 00:31:41.547 [2024-11-26 18:27:29.400031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.400136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.400162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.400176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.400189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.400218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 
00:31:41.547 [2024-11-26 18:27:29.410067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.410151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.410176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.410191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.410204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.410232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 00:31:41.547 [2024-11-26 18:27:29.420093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.420182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.420213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.420229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.420242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.420270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 00:31:41.547 [2024-11-26 18:27:29.430114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.430206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.430232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.430246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.430259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.430288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 
00:31:41.547 [2024-11-26 18:27:29.440142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.440262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.440287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.440308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.440323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.440352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 00:31:41.547 [2024-11-26 18:27:29.450177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.450265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.450294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.450323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.450339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.450371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 00:31:41.547 [2024-11-26 18:27:29.460190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.460283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.460317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.460334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.460347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.460381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 
00:31:41.547 [2024-11-26 18:27:29.470270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.470382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.470407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.470422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.470435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.470464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 00:31:41.547 [2024-11-26 18:27:29.480278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.480390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.480416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.480431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.480444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.480472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 00:31:41.547 [2024-11-26 18:27:29.490274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.547 [2024-11-26 18:27:29.490365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.547 [2024-11-26 18:27:29.490391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.547 [2024-11-26 18:27:29.490405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.547 [2024-11-26 18:27:29.490418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.547 [2024-11-26 18:27:29.490447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.547 qpair failed and we were unable to recover it. 
00:31:41.547 [2024-11-26 18:27:29.500364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.548 [2024-11-26 18:27:29.500461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.548 [2024-11-26 18:27:29.500487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.548 [2024-11-26 18:27:29.500501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.548 [2024-11-26 18:27:29.500513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.548 [2024-11-26 18:27:29.500543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.548 qpair failed and we were unable to recover it. 00:31:41.548 [2024-11-26 18:27:29.510357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.548 [2024-11-26 18:27:29.510447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.548 [2024-11-26 18:27:29.510473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.548 [2024-11-26 18:27:29.510487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.548 [2024-11-26 18:27:29.510500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.548 [2024-11-26 18:27:29.510529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.548 qpair failed and we were unable to recover it. 00:31:41.548 [2024-11-26 18:27:29.520376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.548 [2024-11-26 18:27:29.520470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.548 [2024-11-26 18:27:29.520496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.548 [2024-11-26 18:27:29.520511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.548 [2024-11-26 18:27:29.520528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.548 [2024-11-26 18:27:29.520558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.548 qpair failed and we were unable to recover it. 
00:31:41.548 [2024-11-26 18:27:29.530412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.548 [2024-11-26 18:27:29.530512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.548 [2024-11-26 18:27:29.530538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.548 [2024-11-26 18:27:29.530552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.548 [2024-11-26 18:27:29.530564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.548 [2024-11-26 18:27:29.530593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.548 qpair failed and we were unable to recover it. 00:31:41.548 [2024-11-26 18:27:29.540462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.548 [2024-11-26 18:27:29.540548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.548 [2024-11-26 18:27:29.540574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.548 [2024-11-26 18:27:29.540589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.548 [2024-11-26 18:27:29.540602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.548 [2024-11-26 18:27:29.540631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.548 qpair failed and we were unable to recover it. 00:31:41.548 [2024-11-26 18:27:29.550471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.548 [2024-11-26 18:27:29.550562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.548 [2024-11-26 18:27:29.550593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.548 [2024-11-26 18:27:29.550608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.548 [2024-11-26 18:27:29.550621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.548 [2024-11-26 18:27:29.550652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.548 qpair failed and we were unable to recover it. 
00:31:41.806 [2024-11-26 18:27:29.560474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.806 [2024-11-26 18:27:29.560591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.806 [2024-11-26 18:27:29.560616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.806 [2024-11-26 18:27:29.560631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.806 [2024-11-26 18:27:29.560643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.806 [2024-11-26 18:27:29.560672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.806 qpair failed and we were unable to recover it. 00:31:41.806 [2024-11-26 18:27:29.570549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.806 [2024-11-26 18:27:29.570658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.806 [2024-11-26 18:27:29.570684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.806 [2024-11-26 18:27:29.570698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.806 [2024-11-26 18:27:29.570711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.806 [2024-11-26 18:27:29.570741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.806 qpair failed and we were unable to recover it. 00:31:41.806 [2024-11-26 18:27:29.580548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.806 [2024-11-26 18:27:29.580632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.806 [2024-11-26 18:27:29.580658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.806 [2024-11-26 18:27:29.580673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.806 [2024-11-26 18:27:29.580686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.806 [2024-11-26 18:27:29.580717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.806 qpair failed and we were unable to recover it. 
00:31:41.806 [2024-11-26 18:27:29.590592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.806 [2024-11-26 18:27:29.590684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.806 [2024-11-26 18:27:29.590709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.806 [2024-11-26 18:27:29.590723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.806 [2024-11-26 18:27:29.590736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.806 [2024-11-26 18:27:29.590771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.806 qpair failed and we were unable to recover it. 00:31:41.806 [2024-11-26 18:27:29.600591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.806 [2024-11-26 18:27:29.600677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.806 [2024-11-26 18:27:29.600703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.806 [2024-11-26 18:27:29.600717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.806 [2024-11-26 18:27:29.600731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.806 [2024-11-26 18:27:29.600759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.806 qpair failed and we were unable to recover it. 00:31:41.806 [2024-11-26 18:27:29.610642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.806 [2024-11-26 18:27:29.610719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.806 [2024-11-26 18:27:29.610744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.806 [2024-11-26 18:27:29.610759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.806 [2024-11-26 18:27:29.610772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.806 [2024-11-26 18:27:29.610800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.806 qpair failed and we were unable to recover it. 
00:31:41.806 [2024-11-26 18:27:29.620727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.806 [2024-11-26 18:27:29.620855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.620880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.620894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.620907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.620935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 00:31:41.807 [2024-11-26 18:27:29.630712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.630800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.630825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.630839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.630852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.630880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 00:31:41.807 [2024-11-26 18:27:29.640711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.640796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.640821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.640836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.640848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.640876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 
00:31:41.807 [2024-11-26 18:27:29.650776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.650883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.650908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.650922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.650935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.650964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 00:31:41.807 [2024-11-26 18:27:29.660760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.660889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.660916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.660930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.660943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.660972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 00:31:41.807 [2024-11-26 18:27:29.670808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.670895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.670921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.670935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.670948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.670976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 
00:31:41.807 [2024-11-26 18:27:29.680832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.680932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.680962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.680977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.680990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.681020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 00:31:41.807 [2024-11-26 18:27:29.690898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.690981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.691007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.691022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.691034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.691063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 00:31:41.807 [2024-11-26 18:27:29.700891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.701009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.701035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.701049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.701062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.701091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 
00:31:41.807 [2024-11-26 18:27:29.711035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.711161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.711186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.711200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.711212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.711242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 00:31:41.807 [2024-11-26 18:27:29.721026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.721108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.721133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.721147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.721160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.721195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 00:31:41.807 [2024-11-26 18:27:29.730992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.731105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.731131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.731146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.731158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.731187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 
00:31:41.807 [2024-11-26 18:27:29.741039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.741167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.741192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.807 [2024-11-26 18:27:29.741206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.807 [2024-11-26 18:27:29.741219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.807 [2024-11-26 18:27:29.741248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.807 qpair failed and we were unable to recover it. 00:31:41.807 [2024-11-26 18:27:29.751029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.807 [2024-11-26 18:27:29.751119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.807 [2024-11-26 18:27:29.751144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.808 [2024-11-26 18:27:29.751158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.808 [2024-11-26 18:27:29.751172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.808 [2024-11-26 18:27:29.751200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.808 qpair failed and we were unable to recover it. 00:31:41.808 [2024-11-26 18:27:29.761063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.808 [2024-11-26 18:27:29.761161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.808 [2024-11-26 18:27:29.761187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.808 [2024-11-26 18:27:29.761201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.808 [2024-11-26 18:27:29.761214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.808 [2024-11-26 18:27:29.761243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.808 qpair failed and we were unable to recover it. 
00:31:41.808 [2024-11-26 18:27:29.771092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.808 [2024-11-26 18:27:29.771178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.808 [2024-11-26 18:27:29.771204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.808 [2024-11-26 18:27:29.771219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.808 [2024-11-26 18:27:29.771232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.808 [2024-11-26 18:27:29.771260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.808 qpair failed and we were unable to recover it. 00:31:41.808 [2024-11-26 18:27:29.781126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.808 [2024-11-26 18:27:29.781211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.808 [2024-11-26 18:27:29.781239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.808 [2024-11-26 18:27:29.781257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.808 [2024-11-26 18:27:29.781271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.808 [2024-11-26 18:27:29.781301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.808 qpair failed and we were unable to recover it. 00:31:41.808 [2024-11-26 18:27:29.791153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.808 [2024-11-26 18:27:29.791242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.808 [2024-11-26 18:27:29.791268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.808 [2024-11-26 18:27:29.791283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.808 [2024-11-26 18:27:29.791296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.808 [2024-11-26 18:27:29.791336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.808 qpair failed and we were unable to recover it. 
00:31:41.808 [2024-11-26 18:27:29.801172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.808 [2024-11-26 18:27:29.801259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.808 [2024-11-26 18:27:29.801284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.808 [2024-11-26 18:27:29.801299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.808 [2024-11-26 18:27:29.801321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.808 [2024-11-26 18:27:29.801351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.808 qpair failed and we were unable to recover it. 00:31:41.808 [2024-11-26 18:27:29.811196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:41.808 [2024-11-26 18:27:29.811281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:41.808 [2024-11-26 18:27:29.811323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:41.808 [2024-11-26 18:27:29.811340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:41.808 [2024-11-26 18:27:29.811353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:41.808 [2024-11-26 18:27:29.811382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:41.808 qpair failed and we were unable to recover it. 00:31:42.067 [2024-11-26 18:27:29.821254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.821361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.821387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.821401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.821414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.821443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 
00:31:42.067 [2024-11-26 18:27:29.831265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.831390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.831415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.831430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.831442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.831472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 00:31:42.067 [2024-11-26 18:27:29.841291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.841384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.841410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.841425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.841437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.841466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 00:31:42.067 [2024-11-26 18:27:29.851318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.851409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.851438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.851455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.851468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.851504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 
00:31:42.067 [2024-11-26 18:27:29.861359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.861442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.861468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.861483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.861496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.861525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 00:31:42.067 [2024-11-26 18:27:29.871404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.871502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.871527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.871541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.871554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.871583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 00:31:42.067 [2024-11-26 18:27:29.881405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.881495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.881520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.881534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.881547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.881575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 
00:31:42.067 [2024-11-26 18:27:29.891532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.891617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.891642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.891656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.891669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.891697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 00:31:42.067 [2024-11-26 18:27:29.901451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.901583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.901608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.901622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.901634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.901663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 00:31:42.067 [2024-11-26 18:27:29.911589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.067 [2024-11-26 18:27:29.911679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.067 [2024-11-26 18:27:29.911704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.067 [2024-11-26 18:27:29.911718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.067 [2024-11-26 18:27:29.911731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.067 [2024-11-26 18:27:29.911759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.067 qpair failed and we were unable to recover it. 
00:31:42.067 [2024-11-26 18:27:29.921558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:29.921644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:29.921669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:29.921683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:29.921695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:29.921724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 00:31:42.068 [2024-11-26 18:27:29.931577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:29.931668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:29.931693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:29.931707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:29.931720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:29.931749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 00:31:42.068 [2024-11-26 18:27:29.941584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:29.941675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:29.941701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:29.941722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:29.941736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:29.941765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 
00:31:42.068 [2024-11-26 18:27:29.951623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:29.951744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:29.951769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:29.951783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:29.951796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:29.951825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 00:31:42.068 [2024-11-26 18:27:29.961675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:29.961760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:29.961785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:29.961799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:29.961812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:29.961840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 00:31:42.068 [2024-11-26 18:27:29.971680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:29.971767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:29.971792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:29.971807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:29.971820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:29.971848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 
00:31:42.068 [2024-11-26 18:27:29.981713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:29.981839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:29.981864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:29.981878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:29.981890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:29.981924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 00:31:42.068 [2024-11-26 18:27:29.991739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:29.991827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:29.991853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:29.991867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:29.991880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:29.991908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 00:31:42.068 [2024-11-26 18:27:30.001741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:30.001834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:30.001863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:30.001878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:30.001891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:30.001922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 
00:31:42.068 [2024-11-26 18:27:30.011847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:30.011947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:30.011975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:30.011990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:30.012003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:30.012034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 00:31:42.068 [2024-11-26 18:27:30.021872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:30.021975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:30.022001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:30.022016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:30.022029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:30.022058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 00:31:42.068 [2024-11-26 18:27:30.031878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:30.031995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:30.032029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:30.032051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:30.032067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:30.032108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 
00:31:42.068 [2024-11-26 18:27:30.041882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:30.041975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.068 [2024-11-26 18:27:30.042010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.068 [2024-11-26 18:27:30.042035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.068 [2024-11-26 18:27:30.042054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.068 [2024-11-26 18:27:30.042097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.068 qpair failed and we were unable to recover it. 00:31:42.068 [2024-11-26 18:27:30.051898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.068 [2024-11-26 18:27:30.051993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.069 [2024-11-26 18:27:30.052027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.069 [2024-11-26 18:27:30.052052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.069 [2024-11-26 18:27:30.052073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.069 [2024-11-26 18:27:30.052115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.069 qpair failed and we were unable to recover it. 00:31:42.069 [2024-11-26 18:27:30.061915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.069 [2024-11-26 18:27:30.062025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.069 [2024-11-26 18:27:30.062061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.069 [2024-11-26 18:27:30.062088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.069 [2024-11-26 18:27:30.062112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.069 [2024-11-26 18:27:30.062158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.069 qpair failed and we were unable to recover it. 
00:31:42.069 [2024-11-26 18:27:30.071958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.069 [2024-11-26 18:27:30.072103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.069 [2024-11-26 18:27:30.072137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.069 [2024-11-26 18:27:30.072174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.069 [2024-11-26 18:27:30.072200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.069 [2024-11-26 18:27:30.072247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.069 qpair failed and we were unable to recover it. 00:31:42.328 [2024-11-26 18:27:30.081963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.082049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.082077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.082092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.328 [2024-11-26 18:27:30.082106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.328 [2024-11-26 18:27:30.082136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.328 qpair failed and we were unable to recover it. 00:31:42.328 [2024-11-26 18:27:30.092016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.092103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.092130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.092145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.328 [2024-11-26 18:27:30.092158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.328 [2024-11-26 18:27:30.092188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.328 qpair failed and we were unable to recover it. 
00:31:42.328 [2024-11-26 18:27:30.102065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.102173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.102202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.102228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.328 [2024-11-26 18:27:30.102245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.328 [2024-11-26 18:27:30.102276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.328 qpair failed and we were unable to recover it. 00:31:42.328 [2024-11-26 18:27:30.112051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.112144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.112171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.112187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.328 [2024-11-26 18:27:30.112200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.328 [2024-11-26 18:27:30.112236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.328 qpair failed and we were unable to recover it. 00:31:42.328 [2024-11-26 18:27:30.122072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.122164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.122191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.122206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.328 [2024-11-26 18:27:30.122219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.328 [2024-11-26 18:27:30.122248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.328 qpair failed and we were unable to recover it. 
00:31:42.328 [2024-11-26 18:27:30.132149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.132235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.132264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.132279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.328 [2024-11-26 18:27:30.132292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.328 [2024-11-26 18:27:30.132331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.328 qpair failed and we were unable to recover it. 00:31:42.328 [2024-11-26 18:27:30.142132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.142221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.142247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.142263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.328 [2024-11-26 18:27:30.142276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.328 [2024-11-26 18:27:30.142312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.328 qpair failed and we were unable to recover it. 00:31:42.328 [2024-11-26 18:27:30.152324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.152441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.152466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.152481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.328 [2024-11-26 18:27:30.152494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.328 [2024-11-26 18:27:30.152523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.328 qpair failed and we were unable to recover it. 
00:31:42.328 [2024-11-26 18:27:30.162221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.162315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.162341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.162356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.328 [2024-11-26 18:27:30.162369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.328 [2024-11-26 18:27:30.162398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.328 qpair failed and we were unable to recover it. 00:31:42.328 [2024-11-26 18:27:30.172230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.328 [2024-11-26 18:27:30.172352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.328 [2024-11-26 18:27:30.172378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.328 [2024-11-26 18:27:30.172392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.172405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.329 [2024-11-26 18:27:30.172435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.329 qpair failed and we were unable to recover it. 00:31:42.329 [2024-11-26 18:27:30.182299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.182391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.182417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.182432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.182444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.329 [2024-11-26 18:27:30.182474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.329 qpair failed and we were unable to recover it. 
00:31:42.329 [2024-11-26 18:27:30.192331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.192433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.192459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.192475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.192487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.329 [2024-11-26 18:27:30.192517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.329 qpair failed and we were unable to recover it. 00:31:42.329 [2024-11-26 18:27:30.202325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.202412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.202437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.202458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.202473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.329 [2024-11-26 18:27:30.202502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.329 qpair failed and we were unable to recover it. 00:31:42.329 [2024-11-26 18:27:30.212372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.212460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.212485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.212500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.212513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.329 [2024-11-26 18:27:30.212542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.329 qpair failed and we were unable to recover it. 
00:31:42.329 [2024-11-26 18:27:30.222365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.222481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.222507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.222521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.222535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.329 [2024-11-26 18:27:30.222564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.329 qpair failed and we were unable to recover it. 00:31:42.329 [2024-11-26 18:27:30.232396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.232507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.232532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.232546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.232559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.329 [2024-11-26 18:27:30.232587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.329 qpair failed and we were unable to recover it. 00:31:42.329 [2024-11-26 18:27:30.242441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.242531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.242557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.242571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.242583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xfaffa0 00:31:42.329 [2024-11-26 18:27:30.242619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:42.329 qpair failed and we were unable to recover it. 
00:31:42.329 [2024-11-26 18:27:30.252491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.252581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.252619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.252645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.252670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5078000b90 00:31:42.329 [2024-11-26 18:27:30.252719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.329 qpair failed and we were unable to recover it. 00:31:42.329 [2024-11-26 18:27:30.262511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.262606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.262634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.262658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.262682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5078000b90 00:31:42.329 [2024-11-26 18:27:30.262754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:42.329 qpair failed and we were unable to recover it. 00:31:42.329 [2024-11-26 18:27:30.272581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:42.329 [2024-11-26 18:27:30.272720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:42.329 [2024-11-26 18:27:30.272752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:42.329 [2024-11-26 18:27:30.272768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:42.329 [2024-11-26 18:27:30.272783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90 00:31:42.329 [2024-11-26 18:27:30.272816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:42.329 qpair failed and we were unable to recover it. 
00:31:42.329 [2024-11-26 18:27:30.282540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.329 [2024-11-26 18:27:30.282657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.329 [2024-11-26 18:27:30.282684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.329 [2024-11-26 18:27:30.282700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.329 [2024-11-26 18:27:30.282714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5080000b90
00:31:42.329 [2024-11-26 18:27:30.282746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:31:42.329 qpair failed and we were unable to recover it.
00:31:42.329 [2024-11-26 18:27:30.282863] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:31:42.329 A controller has encountered a failure and is being reset.
00:31:42.329 [2024-11-26 18:27:30.292588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.329 [2024-11-26 18:27:30.292675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.329 [2024-11-26 18:27:30.292708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.329 [2024-11-26 18:27:30.292734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.329 [2024-11-26 18:27:30.292749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5074000b90
00:31:42.329 [2024-11-26 18:27:30.292784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:42.329 qpair failed and we were unable to recover it.
00:31:42.329 [2024-11-26 18:27:30.302639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:42.329 [2024-11-26 18:27:30.302723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:42.329 [2024-11-26 18:27:30.302750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:42.330 [2024-11-26 18:27:30.302764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:42.330 [2024-11-26 18:27:30.302777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5074000b90
00:31:42.330 [2024-11-26 18:27:30.302810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:42.330 qpair failed and we were unable to recover it.
00:31:42.588 Controller properly reset.
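The failure-and-reset sequence above is the host-side pattern the errors point at: spdk_nvme_qpair_process_completions() (nvme_qpair.c) reports the CQ transport error, and recovery requires a controller reset before new qpairs can be connected. A minimal, hypothetical sketch of that handling is shown below; it is not taken from the test code in this run, the helper name poll_io_and_recover() is illustrative, and only the two public SPDK calls it wraps are assumed from the log.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/*
 * Illustrative sketch only: poll one I/O qpair and, if the transport has
 * failed (the "CQ transport error" seen in the log above), reset the
 * controller so new qpairs can be connected afterwards.
 */
static void
poll_io_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* Passing 0 means no limit on completions processed per call. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* A negative return indicates a transport-level failure, e.g.
		 * after the target dropped the TCP connection; the qpair is
		 * unusable until the controller is reset. */
		fprintf(stderr, "qpair failed (rc=%d), resetting controller\n", rc);

		if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
			fprintf(stderr, "controller reset failed\n");
		}
	}
}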
00:31:42.588 Initializing NVMe Controllers 00:31:42.588 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:42.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:42.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:42.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:42.588 Initialization complete. Launching workers. 00:31:42.588 Starting thread on core 1 00:31:42.588 Starting thread on core 2 00:31:42.588 Starting thread on core 3 00:31:42.588 Starting thread on core 0 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:42.588 00:31:42.588 real 0m10.951s 00:31:42.588 user 0m19.387s 00:31:42.588 sys 0m5.091s 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:42.588 ************************************ 00:31:42.588 END TEST nvmf_target_disconnect_tc2 00:31:42.588 ************************************ 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:42.588 rmmod nvme_tcp 00:31:42.588 rmmod nvme_fabrics 00:31:42.588 rmmod nvme_keyring 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 724750 ']' 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 724750 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 724750 ']' 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 724750 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = 
Linux ']' 00:31:42.588 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 724750 00:31:42.846 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:31:42.846 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:31:42.846 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 724750' 00:31:42.846 killing process with pid 724750 00:31:42.846 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 724750 00:31:42.846 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 724750 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.105 18:27:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.011 18:27:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.011 00:31:45.011 real 0m16.027s 00:31:45.011 user 0m46.734s 00:31:45.011 sys 0m7.219s 00:31:45.011 18:27:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.011 18:27:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:45.011 ************************************ 00:31:45.012 END TEST nvmf_target_disconnect 00:31:45.012 ************************************ 00:31:45.012 18:27:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:45.012 00:31:45.012 real 5m6.312s 00:31:45.012 user 10m53.425s 00:31:45.012 sys 1m12.930s 00:31:45.012 18:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.012 18:27:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:45.012 ************************************ 00:31:45.012 END TEST nvmf_host 00:31:45.012 ************************************ 00:31:45.012 18:27:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:45.012 18:27:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:45.012 18:27:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:45.012 18:27:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:45.012 18:27:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.012 18:27:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:45.012 ************************************ 00:31:45.012 START TEST nvmf_target_core_interrupt_mode 00:31:45.012 ************************************ 00:31:45.012 18:27:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:45.271 * Looking for test storage... 00:31:45.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.271 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:45.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.272 --rc genhtml_branch_coverage=1 00:31:45.272 --rc genhtml_function_coverage=1 00:31:45.272 --rc genhtml_legend=1 00:31:45.272 --rc geninfo_all_blocks=1 00:31:45.272 --rc geninfo_unexecuted_blocks=1 00:31:45.272 00:31:45.272 ' 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:45.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.272 --rc genhtml_branch_coverage=1 00:31:45.272 --rc genhtml_function_coverage=1 00:31:45.272 --rc genhtml_legend=1 00:31:45.272 --rc geninfo_all_blocks=1 00:31:45.272 --rc geninfo_unexecuted_blocks=1 00:31:45.272 00:31:45.272 ' 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:45.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.272 --rc genhtml_branch_coverage=1 00:31:45.272 --rc genhtml_function_coverage=1 00:31:45.272 --rc genhtml_legend=1 00:31:45.272 --rc geninfo_all_blocks=1 00:31:45.272 --rc geninfo_unexecuted_blocks=1 00:31:45.272 00:31:45.272 ' 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:45.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.272 --rc genhtml_branch_coverage=1 00:31:45.272 --rc genhtml_function_coverage=1 00:31:45.272 --rc genhtml_legend=1 00:31:45.272 --rc geninfo_all_blocks=1 00:31:45.272 --rc geninfo_unexecuted_blocks=1 00:31:45.272 00:31:45.272 ' 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.272 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:45.273 ************************************ 00:31:45.273 START TEST nvmf_abort 00:31:45.273 ************************************ 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:45.273 * Looking for test storage... 00:31:45.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:31:45.273 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:45.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.533 --rc genhtml_branch_coverage=1 00:31:45.533 --rc genhtml_function_coverage=1 00:31:45.533 --rc genhtml_legend=1 00:31:45.533 --rc geninfo_all_blocks=1 00:31:45.533 --rc geninfo_unexecuted_blocks=1 00:31:45.533 00:31:45.533 ' 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:45.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.533 --rc genhtml_branch_coverage=1 00:31:45.533 --rc genhtml_function_coverage=1 00:31:45.533 --rc genhtml_legend=1 00:31:45.533 --rc geninfo_all_blocks=1 00:31:45.533 --rc geninfo_unexecuted_blocks=1 00:31:45.533 00:31:45.533 ' 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:45.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.533 --rc genhtml_branch_coverage=1 00:31:45.533 --rc genhtml_function_coverage=1 00:31:45.533 --rc genhtml_legend=1 00:31:45.533 --rc geninfo_all_blocks=1 00:31:45.533 --rc geninfo_unexecuted_blocks=1 00:31:45.533 00:31:45.533 ' 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:45.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.533 --rc genhtml_branch_coverage=1 00:31:45.533 --rc genhtml_function_coverage=1 00:31:45.533 --rc genhtml_legend=1 00:31:45.533 --rc geninfo_all_blocks=1 00:31:45.533 --rc geninfo_unexecuted_blocks=1 00:31:45.533 00:31:45.533 ' 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.533 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.534 18:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:45.534 18:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.066 18:27:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:48.066 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:48.066 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:48.066 Found net devices under 0000:09:00.0: cvl_0_0 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.066 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:48.067 Found net devices under 0000:09:00.1: cvl_0_1 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:31:48.067 00:31:48.067 --- 10.0.0.2 ping statistics --- 00:31:48.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.067 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:48.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:31:48.067 00:31:48.067 --- 10.0.0.1 ping statistics --- 00:31:48.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.067 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=727563 
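Stripped of the xtrace noise, the bring-up that nvmftestinit just performed (after the PCI scan matched both 0x8086:0x159b e810 ports as cvl_0_0 and cvl_0_1) boils down to the sequence below. This is a condensed sketch of the commands visible in the trace, not a replacement for nvmf/common.sh; the real iptables rule also carries an SPDK_NVMF comment so teardown can filter it back out:
ip netns add cvl_0_0_ns_spdk                                         # target port lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP to the target port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator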
00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 727563 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 727563 ']' 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.067 [2024-11-26 18:27:35.722796] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:48.067 [2024-11-26 18:27:35.723841] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:31:48.067 [2024-11-26 18:27:35.723896] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.067 [2024-11-26 18:27:35.797188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:48.067 [2024-11-26 18:27:35.854754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.067 [2024-11-26 18:27:35.854803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.067 [2024-11-26 18:27:35.854817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.067 [2024-11-26 18:27:35.854828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.067 [2024-11-26 18:27:35.854838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.067 [2024-11-26 18:27:35.856343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.067 [2024-11-26 18:27:35.856388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:48.067 [2024-11-26 18:27:35.856393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.067 [2024-11-26 18:27:35.946622] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:48.067 [2024-11-26 18:27:35.946822] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:48.067 [2024-11-26 18:27:35.946838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
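The target itself is then launched inside that namespace with --interrupt-mode and reactor mask 0xE (cores 1-3), which is why the trace shows three reactors starting and the poll-group threads being set to interrupt mode; waitforlisten just blocks until the new process answers on its RPC socket. A rough stand-alone equivalent, assuming the default /var/tmp/spdk.sock socket and repo-relative paths (the helper's real retry and timeout handling is more involved):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do sleep 0.5; done   # crude stand-in for waitforlisten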
00:31:48.067 [2024-11-26 18:27:35.947090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:48.067 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.068 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.068 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.068 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:48.068 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.068 18:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.068 [2024-11-26 18:27:35.997073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.068 Malloc0 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.068 Delay0 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.068 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.068 [2024-11-26 18:27:36.073338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.326 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.326 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:48.326 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.326 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:48.326 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.326 18:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:48.326 [2024-11-26 18:27:36.224446] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:50.855 Initializing NVMe Controllers 00:31:50.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:50.855 controller IO queue size 128 less than required 00:31:50.855 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:50.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:50.855 Initialization complete. Launching workers. 
00:31:50.855 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29195 00:31:50.855 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29252, failed to submit 66 00:31:50.855 success 29195, unsuccessful 57, failed 0 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:50.856 rmmod nvme_tcp 00:31:50.856 rmmod nvme_fabrics 00:31:50.856 rmmod nvme_keyring 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 727563 ']' 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 727563 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 727563 ']' 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 727563 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 727563 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 727563' 00:31:50.856 killing process with pid 727563 00:31:50.856 
18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 727563 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 727563 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.856 18:27:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.759 00:31:52.759 real 0m7.435s 00:31:52.759 user 0m9.417s 00:31:52.759 sys 0m2.891s 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:52.759 ************************************ 00:31:52.759 END TEST nvmf_abort 00:31:52.759 ************************************ 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:52.759 ************************************ 00:31:52.759 START TEST nvmf_ns_hotplug_stress 00:31:52.759 ************************************ 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:52.759 * Looking for test storage... 
00:31:52.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:52.759 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:53.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.020 --rc genhtml_branch_coverage=1 00:31:53.020 --rc genhtml_function_coverage=1 00:31:53.020 --rc genhtml_legend=1 00:31:53.020 --rc geninfo_all_blocks=1 00:31:53.020 --rc geninfo_unexecuted_blocks=1 00:31:53.020 00:31:53.020 ' 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:53.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.020 --rc genhtml_branch_coverage=1 00:31:53.020 --rc genhtml_function_coverage=1 00:31:53.020 --rc genhtml_legend=1 00:31:53.020 --rc geninfo_all_blocks=1 00:31:53.020 --rc geninfo_unexecuted_blocks=1 00:31:53.020 00:31:53.020 ' 00:31:53.020 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:53.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.021 --rc genhtml_branch_coverage=1 00:31:53.021 --rc genhtml_function_coverage=1 00:31:53.021 --rc genhtml_legend=1 00:31:53.021 --rc geninfo_all_blocks=1 00:31:53.021 --rc geninfo_unexecuted_blocks=1 00:31:53.021 00:31:53.021 ' 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:53.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.021 --rc genhtml_branch_coverage=1 00:31:53.021 --rc genhtml_function_coverage=1 
00:31:53.021 --rc genhtml_legend=1 00:31:53.021 --rc geninfo_all_blocks=1 00:31:53.021 --rc geninfo_unexecuted_blocks=1 00:31:53.021 00:31:53.021 ' 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:53.021 18:27:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.925 18:27:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:54.925 18:27:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:54.925 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.925 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:54.926 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.926 
18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:54.926 Found net devices under 0000:09:00.0: cvl_0_0 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:54.926 Found net devices under 0000:09:00.1: cvl_0_1 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.926 18:27:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:54.926 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:31:55.185 00:31:55.185 --- 10.0.0.2 ping statistics --- 00:31:55.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.185 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:55.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:31:55.185 00:31:55.185 --- 10.0.0.1 ping statistics --- 00:31:55.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.185 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:55.185 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=729789 00:31:55.186 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:55.186 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 729789 00:31:55.186 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 729789 ']' 00:31:55.186 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.186 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.186 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:55.186 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.186 18:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:55.186 [2024-11-26 18:27:43.049660] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.186 [2024-11-26 18:27:43.050744] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:31:55.186 [2024-11-26 18:27:43.050811] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.186 [2024-11-26 18:27:43.126709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:55.186 [2024-11-26 18:27:43.183668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.186 [2024-11-26 18:27:43.183734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.186 [2024-11-26 18:27:43.183762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.186 [2024-11-26 18:27:43.183774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.186 [2024-11-26 18:27:43.183783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.186 [2024-11-26 18:27:43.185255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.186 [2024-11-26 18:27:43.185331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.186 [2024-11-26 18:27:43.185337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.444 [2024-11-26 18:27:43.274216] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:55.444 [2024-11-26 18:27:43.274444] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:55.444 [2024-11-26 18:27:43.274458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.444 [2024-11-26 18:27:43.274690] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:55.444 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:55.444 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:31:55.444 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:55.444 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:55.444 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:55.444 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:55.444 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:55.444 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:55.702 [2024-11-26 18:27:43.582050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.702 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:55.960 18:27:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.218 [2024-11-26 18:27:44.126403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.218 18:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:56.477 18:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:56.735 Malloc0 00:31:56.735 18:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:56.993 Delay0 00:31:56.993 18:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.559 18:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:57.862 NULL1 00:31:57.862 18:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:31:58.144 18:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=730205 00:31:58.145 18:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:58.145 18:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:31:58.145 18:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.402 18:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.660 18:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:58.660 18:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:58.919 true 00:31:58.919 18:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:31:58.919 18:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.176 18:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.433 18:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:59.433 18:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:59.691 true 00:31:59.691 18:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:31:59.691 18:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.949 18:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.207 18:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:32:00.207 18:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:32:00.464 true 00:32:00.464 18:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:00.464 18:27:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:01.397 Read completed with error (sct=0, sc=11) 00:32:01.654 18:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:01.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:01.912 18:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:32:01.912 18:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:32:02.170 true 00:32:02.170 18:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:02.170 18:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.428 18:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:02.686 18:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:32:02.686 18:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:32:02.943 true 00:32:02.943 18:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:02.943 18:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.199 18:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:03.456 18:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:32:03.456 18:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:32:03.714 true 00:32:03.714 18:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:03.714 18:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.646 18:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:04.646 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:04.904 18:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:32:04.904 18:27:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:32:05.161 true 00:32:05.161 18:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:05.161 18:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.419 18:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:05.676 18:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:32:05.676 18:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:32:05.932 true 00:32:05.932 18:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:05.932 18:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.189 18:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:06.446 18:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:32:06.446 18:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:32:07.011 true 00:32:07.011 18:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:07.011 18:27:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:07.942 18:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:07.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:07.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:07.942 18:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1010 00:32:07.942 18:27:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:32:08.200 true 00:32:08.457 18:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:08.457 18:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:08.715 18:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:08.973 18:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:32:08.973 18:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:32:09.230 true 00:32:09.230 18:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:09.230 18:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:09.795 18:27:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.360 18:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:32:10.360 18:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:32:10.360 true 00:32:10.360 18:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:10.360 18:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:10.617 18:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.874 18:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:32:10.874 18:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:32:11.132 true 00:32:11.389 18:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:11.389 18:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:11.647 18:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:11.904 18:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:32:11.904 18:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:32:12.162 true 00:32:12.162 18:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:12.162 18:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.094 18:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:13.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:13.351 18:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:32:13.351 18:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:32:13.609 true 00:32:13.609 18:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:13.609 18:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.866 18:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:14.123 18:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:32:14.123 18:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:32:14.380 true 00:32:14.380 18:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:14.380 18:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:14.637 18:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:14.894 18:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:14.894 18:28:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:15.151 true 00:32:15.151 18:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:15.151 18:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.082 18:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.339 18:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:16.339 18:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:16.596 true 00:32:16.596 18:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:16.596 18:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.161 18:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:17.161 18:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:17.161 18:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:17.418 true 00:32:17.418 18:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:17.418 18:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.984 18:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:17.984 18:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:17.984 18:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:18.242 true 00:32:18.242 18:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:18.242 18:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.615 18:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:19.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:19.615 18:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:19.615 18:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:19.872 true 00:32:19.872 18:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:19.872 18:28:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:20.130 18:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:20.388 18:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:20.388 18:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:20.646 true 00:32:20.646 18:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:20.646 18:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:20.904 18:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.162 18:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:21.162 18:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:21.423 true 00:32:21.423 18:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:21.423 18:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:22.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:22.362 18:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:32:22.619 18:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:22.619 18:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:22.877 true 00:32:22.878 18:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:22.878 18:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.444 18:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:23.444 18:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:23.444 18:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:23.702 true 00:32:23.702 18:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:23.702 18:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:23.960 18:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:24.218 18:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:24.218 18:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:24.835 true 00:32:24.835 18:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:24.835 18:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:25.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.765 18:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:25.765 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:25.765 18:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:25.765 18:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1027 00:32:26.023 true 00:32:26.023 18:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:26.023 18:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.279 18:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:26.536 18:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:26.536 18:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:26.793 true 00:32:27.051 18:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:27.051 18:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:27.309 18:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:27.565 18:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:32:27.565 18:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:32:27.822 true 00:32:27.822 18:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:27.822 18:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:28.754 Initializing NVMe Controllers 00:32:28.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:28.755 Controller IO queue size 128, less than required. 00:32:28.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.755 Controller IO queue size 128, less than required. 00:32:28.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:28.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:28.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:28.755 Initialization complete. Launching workers. 
00:32:28.755 ======================================================== 00:32:28.755 Latency(us) 00:32:28.755 Device Information : IOPS MiB/s Average min max 00:32:28.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 360.07 0.18 129693.41 2670.93 1013507.03 00:32:28.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7616.15 3.72 16757.52 1613.36 451039.23 00:32:28.755 ======================================================== 00:32:28.755 Total : 7976.21 3.89 21855.73 1613.36 1013507.03 00:32:28.755 00:32:28.755 18:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:29.013 18:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:32:29.013 18:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:32:29.270 true 00:32:29.270 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 730205 00:32:29.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (730205) - No such process 00:32:29.270 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 730205 00:32:29.270 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:29.528 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:29.786 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:32:29.786 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:32:29.786 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:32:29.786 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:29.786 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:32:30.044 null0 00:32:30.044 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:30.044 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:30.044 18:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:32:30.302 null1 00:32:30.302 18:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:30.302 18:28:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:30.302 18:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:30.561 null2 00:32:30.561 18:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:30.561 18:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:30.561 18:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:30.820 null3 00:32:30.820 18:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:30.820 18:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:30.820 18:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:31.077 null4 00:32:31.077 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:31.077 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:31.077 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:31.336 null5 00:32:31.336 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:31.336 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:31.336 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:31.594 null6 00:32:31.852 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:31.852 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:31.852 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:31.852 null7 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:32.111 18:28:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:32.111 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 734223 734224 734225 734228 734230 734232 734234 734236 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.112 18:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:32.370 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:32.370 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.370 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:32.370 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:32.370 18:28:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:32.370 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:32.370 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:32.370 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.628 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:32.629 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:32.629 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.629 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.629 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:32.629 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:32.629 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:32.629 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:32.887 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:32.887 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:32.887 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:32.887 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:32.887 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:32.887 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:32.887 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:32.887 18:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 
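The interleaved xtrace output above and below corresponds to the parallel add/remove phase of ns_hotplug_stress.sh. In the earlier part of the trace (the sh@44-50 markers), the main loop repeatedly removed and re-added the Delay0 namespace and grew the NULL1 bdev by one unit per pass (null_size 1004 through 1030) while using kill -0 to confirm that the I/O generator, PID 730205, was still running; once kill reported "No such process", the script waited on that PID and switched to this phase, in which eight add_remove workers (one per null0-null7 bdev, sh@58-66) each attach and detach their namespace ten times (sh@14-18) while the parent waits on them. The following is a minimal sketch reconstructed from those trace markers; the helper name add_remove, the rpc.py invocations, and the bdev_null_create arguments (100, 4096) are taken from the trace, but the actual script in spdk/test/nvmf/target/ns_hotplug_stress.sh may differ in detail.

    #!/usr/bin/env bash
    # Sketch of the parallel add/remove phase, reconstructed from the xtrace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    # add_remove <nsid> <bdev>: attach and detach one namespace ten times (sh@14-18).
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
            $rpc nvmf_subsystem_remove_ns "$subsys" "$nsid"
        done
    }

    # Parent side (sh@58-66): one null bdev per worker, then run the workers in parallel.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

Because the eight workers run as background subshells, their xtrace records interleave in the console log, which is why add_ns and remove_ns calls for different namespace IDs appear mixed together in the surrounding output.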
00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.146 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:33.404 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:33.404 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:33.404 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:33.404 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:33.404 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:33.404 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:33.404 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:33.662 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:33.920 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:34.179 18:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:34.179 18:28:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.179 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:34.179 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:34.179 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:34.179 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:34.179 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:34.179 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.438 18:28:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.438 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:34.696 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:34.696 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:34.696 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:34.696 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:34.696 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:34.696 
18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:34.696 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:34.696 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:34.955 18:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:35.214 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.214 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:35.472 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:35.472 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:35.472 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:35.472 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:35.472 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:35.472 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:35.730 
18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:35.730 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:35.989 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:35.989 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:35.989 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:35.989 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:35.989 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:35.989 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:35.989 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:35.989 18:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.247 18:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.247 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:36.505 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:36.505 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:36.505 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:36.505 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:36.505 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:36.505 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:36.505 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:36.505 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:36.764 18:28:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:36.764 18:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:37.023 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:37.023 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.281 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:37.281 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:37.281 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:37.281 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:37.281 
18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:37.281 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:37.539 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:37.797 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:37.797 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:37.797 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:37.797 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:37.797 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:37.797 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:37.797 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:37.797 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:38.055 18:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:38.055 rmmod nvme_tcp 00:32:38.055 rmmod nvme_fabrics 00:32:38.055 rmmod nvme_keyring 00:32:38.055 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:38.055 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:38.055 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:38.055 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 729789 ']' 00:32:38.055 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 729789 00:32:38.055 18:28:26 
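[editor's note] The trace up to this point is the namespace hotplug stress loop finishing its ten iterations: each pass attaches the eight null bdevs as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1 over the RPC socket and then detaches them again. A minimal serialized sketch of that loop, assuming a running nvmf_tgt with the subsystem and null0..null7 bdevs already created (the real ns_hotplug_stress.sh apparently fires the RPCs concurrently, which is why the add/remove order in the trace is shuffled):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do
      # attach null0..null7 as namespaces 1..8, then detach them all again
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      for n in {1..8}; do
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done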
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 729789 ']' 00:32:38.055 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 729789 00:32:38.055 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:38.055 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:38.056 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 729789 00:32:38.314 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:38.314 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 729789' 00:32:38.315 killing process with pid 729789 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 729789 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 729789 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.315 18:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:40.875 00:32:40.875 real 0m47.676s 00:32:40.875 user 3m21.146s 00:32:40.875 sys 0m21.550s 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.875 18:28:28 
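[editor's note] The nvmftestfini trace just above reduces to: unload the host-side NVMe-oF kernel modules, kill the nvmf_tgt process, restore the firewall rules minus the SPDK additions, and flush the test address. A rough, hand-written equivalent of that sequence, with names taken from the trace where they appear and the rest clearly illustrative (this is not the verbatim nvmf/common.sh):

  sync
  modprobe -v -r nvme-tcp || true      # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics || true
  kill "$nvmfpid" || true              # nvmfpid: the nvmf_tgt reactor process (pid 729789 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-added rules
  ip -4 addr flush cvl_0_1             # clear the test address from the second E810 port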
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:40.875 ************************************ 00:32:40.875 END TEST nvmf_ns_hotplug_stress 00:32:40.875 ************************************ 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:40.875 ************************************ 00:32:40.875 START TEST nvmf_delete_subsystem 00:32:40.875 ************************************ 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:40.875 * Looking for test storage... 00:32:40.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:40.875 18:28:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:40.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.875 --rc genhtml_branch_coverage=1 00:32:40.875 --rc genhtml_function_coverage=1 00:32:40.875 --rc genhtml_legend=1 00:32:40.875 --rc geninfo_all_blocks=1 00:32:40.875 --rc geninfo_unexecuted_blocks=1 00:32:40.875 00:32:40.875 ' 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:40.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.875 --rc genhtml_branch_coverage=1 00:32:40.875 --rc genhtml_function_coverage=1 00:32:40.875 --rc genhtml_legend=1 00:32:40.875 --rc geninfo_all_blocks=1 00:32:40.875 --rc geninfo_unexecuted_blocks=1 00:32:40.875 00:32:40.875 ' 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:40.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.875 --rc genhtml_branch_coverage=1 00:32:40.875 --rc genhtml_function_coverage=1 00:32:40.875 --rc genhtml_legend=1 00:32:40.875 --rc geninfo_all_blocks=1 00:32:40.875 --rc 
geninfo_unexecuted_blocks=1 00:32:40.875 00:32:40.875 ' 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:40.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.875 --rc genhtml_branch_coverage=1 00:32:40.875 --rc genhtml_function_coverage=1 00:32:40.875 --rc genhtml_legend=1 00:32:40.875 --rc geninfo_all_blocks=1 00:32:40.875 --rc geninfo_unexecuted_blocks=1 00:32:40.875 00:32:40.875 ' 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
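[editor's note] Before the delete_subsystem test proper starts, the trace runs the lcov version through the lt/cmp_versions helpers in scripts/common.sh so it can pick coverage flags compatible with lcov 1.x versus 2.x (here 1.15 < 2, so the 1.x --rc flags are exported). A condensed sketch of that field-by-field numeric comparison, handling plain numeric components only and using an illustrative helper name:

  # returns 0 if $1 < $2, comparing dot/dash/colon separated numeric fields (e.g. 1.15 < 2)
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1
  }

  lcov_ver=$(lcov --version | awk '{print $NF}')   # "1.15" on this builder
  if version_lt "$lcov_ver" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi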
]] 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.875 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:40.876 18:28:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:40.876 18:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:42.819 18:28:30 
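[editor's note] The build_nvmf_app_args trace above assembles the nvmf_tgt command line as a Bash array; because this job runs with --interrupt-mode, that flag is appended to NVMF_APP. A trimmed illustration of the pattern, with the binary path and default values assumed rather than taken from the trace:

  NVMF_APP_SHM_ID=0                               # assumed default shm id
  NO_HUGE=()
  interrupt_mode=1                                # this job passes --interrupt-mode
  NVMF_APP=(./build/bin/nvmf_tgt)                 # assumed binary path, not shown in the trace
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shm id and trace-flag mask, as traced above
  NVMF_APP+=("${NO_HUGE[@]}")                     # stays empty unless the run asks for --no-huge
  (( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)   # reactors sleep on fds instead of busy-polling
  echo "would launch: ${NVMF_APP[*]}"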
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:42.819 18:28:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:42.819 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:42.819 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.819 18:28:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:42.819 Found net devices under 0000:09:00.0: cvl_0_0 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:42.819 Found net devices under 0000:09:00.1: cvl_0_1 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:42.819 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.820 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:43.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:32:43.079 00:32:43.079 --- 10.0.0.2 ping statistics --- 00:32:43.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.079 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:43.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:32:43.079 00:32:43.079 --- 10.0.0.1 ping statistics --- 00:32:43.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.079 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.079 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=737111 00:32:43.080 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:43.080 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 737111 00:32:43.080 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 737111 ']' 00:32:43.080 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.080 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:43.080 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
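The trace above builds the TCP loopback test bed: the target-side E810 port (cvl_0_0) is moved into its own network namespace, both ports get addresses on 10.0.0.0/24, an iptables rule opens port 4420, and a ping in each direction confirms connectivity. Condensed here as a sketch of the equivalent commands (interface names, namespace name and addresses are taken from the log output above; run as root on a host with both cvl_0_* ports present):

    ip netns add cvl_0_0_ns_spdk                                    # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP into the initiator port
    ping -c 1 10.0.0.2                                              # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root namespace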
00:32:43.080 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:43.080 18:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.080 [2024-11-26 18:28:30.980100] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:43.080 [2024-11-26 18:28:30.981173] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:32:43.080 [2024-11-26 18:28:30.981237] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.080 [2024-11-26 18:28:31.051356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:43.338 [2024-11-26 18:28:31.109337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.338 [2024-11-26 18:28:31.109384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.338 [2024-11-26 18:28:31.109412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.338 [2024-11-26 18:28:31.109423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.338 [2024-11-26 18:28:31.109433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.338 [2024-11-26 18:28:31.110845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.338 [2024-11-26 18:28:31.110851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.338 [2024-11-26 18:28:31.200891] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:43.338 [2024-11-26 18:28:31.200959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:43.338 [2024-11-26 18:28:31.201158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
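With networking in place, nvmfappstart launches the target inside that namespace with interrupt mode enabled, which is what produces the "Set spdk_thread (...) to intr mode" and "Reactor started on core 0/1" notices above. The launch as echoed by the test, written out as a sketch (binary path is this workspace's build; -m 0x3 selects the two reactors, -e 0xFFFF enables all tracepoint groups):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # waitforlisten then blocks until the app answers RPCs on /var/tmp/spdk.sock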
00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.338 [2024-11-26 18:28:31.259478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.338 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.339 [2024-11-26 18:28:31.275717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.339 NULL1 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.339 18:28:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.339 Delay0 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=737132 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:43.339 18:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:43.597 [2024-11-26 18:28:31.358319] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
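The subsystem under test is then assembled through a short RPC sequence before spdk_nvme_perf is pointed at it: a TCP transport, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (1,000,000 us of artificial latency per I/O) exposed as a namespace. A condensed sketch, assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                            # 1000 MB null bdev, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # I/O load that the subsystem will be deleted out from under:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

The delay bdev is what keeps enough I/O queued that nvmf_delete_subsystem, issued next, races against outstanding requests and produces the "completed with error" spam that follows.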
00:32:45.494 18:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:45.494 18:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.494 18:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 [2024-11-26 18:28:33.491193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9094000c40 is same with the state(6) to be set 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 
00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 starting I/O failed: -6 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Write completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.494 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 starting I/O failed: -6 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 starting I/O failed: -6 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 starting I/O failed: -6 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 starting I/O failed: -6 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 starting I/O failed: -6 00:32:45.495 Write completed with error 
(sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 starting I/O failed: -6 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 starting I/O failed: -6 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 starting I/O failed: -6 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 starting I/O failed: -6 00:32:45.495 [2024-11-26 18:28:33.492153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a4a0 is same with the state(6) to be set 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Write completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 Read completed with error (sct=0, sc=8) 00:32:45.495 [2024-11-26 18:28:33.492482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7f909400d7e0 is same with the state(6) to be set 00:32:46.868 [2024-11-26 18:28:34.452835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197b9b0 is same with the state(6) to be set 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 [2024-11-26 18:28:34.485193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f909400d350 is same with the state(6) to be set 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 [2024-11-26 18:28:34.492406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a860 is same with the state(6) to be set 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 
00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 [2024-11-26 18:28:34.492672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a2c0 is same with the state(6) to be set 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Write completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 Read completed with error (sct=0, sc=8) 00:32:46.868 [2024-11-26 18:28:34.492904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197a680 is same with the state(6) to be set 00:32:46.868 Initializing NVMe Controllers 00:32:46.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:46.868 Controller IO queue size 128, less than required. 00:32:46.868 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:46.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:46.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:46.868 Initialization complete. Launching workers. 
00:32:46.868 ======================================================== 00:32:46.868 Latency(us) 00:32:46.868 Device Information : IOPS MiB/s Average min max 00:32:46.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.64 0.08 962604.86 1868.48 1012328.09 00:32:46.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 144.87 0.07 920987.55 578.56 1013036.22 00:32:46.869 ======================================================== 00:32:46.869 Total : 318.51 0.16 943676.12 578.56 1013036.22 00:32:46.869 00:32:46.869 [2024-11-26 18:28:34.493856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197b9b0 (9): Bad file descriptor 00:32:46.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:32:46.869 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.869 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:32:46.869 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 737132 00:32:46.869 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 737132 00:32:47.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (737132) - No such process 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 737132 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 737132 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:47.127 18:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 737132 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:47.127 [2024-11-26 18:28:35.015653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=737634 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 737634 00:32:47.127 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:47.127 [2024-11-26 18:28:35.079550] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
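The second pass re-creates cnode1, re-adds the listener and the Delay0 namespace, and starts a shorter 3-second perf run (pid 737634 above); the trace that follows is the test polling twice a second until that process exits. Roughly, as a sketch of the delay / kill -0 / sleep iterations rather than the verbatim script:

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do
        (( delay++ > 20 )) && { echo "perf did not exit in time"; exit 1; }
        sleep 0.5
    done
    # the later "kill: (737634) - No such process" is this probe seeing that perf has exited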
00:32:47.692 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:47.692 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 737634 00:32:47.692 18:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:48.258 18:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:48.258 18:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 737634 00:32:48.258 18:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:48.823 18:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:48.823 18:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 737634 00:32:48.823 18:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:49.081 18:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:49.081 18:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 737634 00:32:49.081 18:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:49.646 18:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:49.646 18:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 737634 00:32:49.646 18:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:50.212 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:50.212 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 737634 00:32:50.212 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:50.470 Initializing NVMe Controllers 00:32:50.470 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:50.470 Controller IO queue size 128, less than required. 00:32:50.470 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:50.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:50.470 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:50.470 Initialization complete. Launching workers. 
00:32:50.470 ======================================================== 00:32:50.470 Latency(us) 00:32:50.470 Device Information : IOPS MiB/s Average min max 00:32:50.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005173.86 1000234.43 1042336.80 00:32:50.470 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004867.77 1000258.45 1041392.62 00:32:50.470 ======================================================== 00:32:50.470 Total : 256.00 0.12 1005020.82 1000234.43 1042336.80 00:32:50.470 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 737634 00:32:50.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (737634) - No such process 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 737634 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:50.728 rmmod nvme_tcp 00:32:50.728 rmmod nvme_fabrics 00:32:50.728 rmmod nvme_keyring 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 737111 ']' 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 737111 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 737111 ']' 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 737111 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:50.728 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 737111 00:32:50.729 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:50.729 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:50.729 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 737111' 00:32:50.729 killing process with pid 737111 00:32:50.729 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 737111 00:32:50.729 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 737111 00:32:50.987 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:50.987 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:50.987 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:50.987 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:50.987 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:50.987 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:50.987 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:50.988 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:50.988 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:50.988 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.988 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.988 18:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.893 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:52.893 00:32:52.893 real 0m12.474s 00:32:52.893 user 0m24.667s 00:32:52.893 sys 0m3.970s 00:32:52.893 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:52.893 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:52.893 ************************************ 00:32:52.893 END TEST nvmf_delete_subsystem 00:32:52.893 ************************************ 00:32:53.152 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:53.152 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:53.152 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:53.152 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:53.152 ************************************ 00:32:53.152 START TEST nvmf_host_management 00:32:53.152 ************************************ 00:32:53.152 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:53.152 * Looking for test storage... 00:32:53.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:53.152 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:53.153 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:53.153 18:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:53.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.153 --rc genhtml_branch_coverage=1 00:32:53.153 --rc genhtml_function_coverage=1 00:32:53.153 --rc genhtml_legend=1 00:32:53.153 --rc geninfo_all_blocks=1 00:32:53.153 --rc geninfo_unexecuted_blocks=1 00:32:53.153 00:32:53.153 ' 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:53.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.153 --rc genhtml_branch_coverage=1 00:32:53.153 --rc genhtml_function_coverage=1 00:32:53.153 --rc genhtml_legend=1 00:32:53.153 --rc geninfo_all_blocks=1 00:32:53.153 --rc geninfo_unexecuted_blocks=1 00:32:53.153 00:32:53.153 ' 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:53.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.153 --rc genhtml_branch_coverage=1 00:32:53.153 --rc genhtml_function_coverage=1 00:32:53.153 --rc genhtml_legend=1 00:32:53.153 --rc geninfo_all_blocks=1 00:32:53.153 --rc geninfo_unexecuted_blocks=1 00:32:53.153 00:32:53.153 ' 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:53.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:53.153 --rc genhtml_branch_coverage=1 00:32:53.153 --rc genhtml_function_coverage=1 00:32:53.153 --rc genhtml_legend=1 
00:32:53.153 --rc geninfo_all_blocks=1 00:32:53.153 --rc geninfo_unexecuted_blocks=1 00:32:53.153 00:32:53.153 ' 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:53.153 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.154 18:28:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:53.154 18:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:55.685 18:28:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:55.685 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:55.686 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:55.686 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
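The loop traced above maps each discovered E810 port to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/. A minimal by-hand sketch of the same check for the first port reported in this run (the PCI address and the expected vendor/device IDs are taken from the messages above; the commands are plain sysfs reads and are not lines from this log):

ls /sys/bus/pci/devices/0000:09:00.0/net/        # name of the backing interface, cvl_0_0 in this run
cat /sys/bus/pci/devices/0000:09:00.0/vendor     # 0x8086 (Intel)
cat /sys/bus/pci/devices/0000:09:00.0/device     # 0x159b (E810 family)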
00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:55.686 Found net devices under 0000:09:00.0: cvl_0_0 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:55.686 Found net devices under 0000:09:00.1: cvl_0_1 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:55.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:55.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:32:55.686 00:32:55.686 --- 10.0.0.2 ping statistics --- 00:32:55.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.686 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:55.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:55.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:32:55.686 00:32:55.686 --- 10.0.0.1 ping statistics --- 00:32:55.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:55.686 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:55.686 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=739994 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 739994 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 739994 ']' 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:55.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:55.687 [2024-11-26 18:28:43.438182] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:55.687 [2024-11-26 18:28:43.439279] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:32:55.687 [2024-11-26 18:28:43.439365] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.687 [2024-11-26 18:28:43.511572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:55.687 [2024-11-26 18:28:43.567251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.687 [2024-11-26 18:28:43.567324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.687 [2024-11-26 18:28:43.567339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.687 [2024-11-26 18:28:43.567364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.687 [2024-11-26 18:28:43.567373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.687 [2024-11-26 18:28:43.569048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:55.687 [2024-11-26 18:28:43.569175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:55.687 [2024-11-26 18:28:43.569241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:55.687 [2024-11-26 18:28:43.569245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.687 [2024-11-26 18:28:43.655005] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:55.687 [2024-11-26 18:28:43.655182] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:55.687 [2024-11-26 18:28:43.655517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:55.687 [2024-11-26 18:28:43.656164] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:55.687 [2024-11-26 18:28:43.656429] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
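With nvmf_tgt now running in interrupt mode inside the cvl_0_0_ns_spdk namespace set up above (target side 10.0.0.2, initiator side 10.0.0.1), the test goes on to configure the target through RPCs, as the next entries show. A minimal sketch of an equivalent manual bring-up follows; the nvmf_tgt command line and the nvmf_create_transport arguments mirror the commands captured in this log, while the per-RPC subsystem setup and the Malloc0/cnode0 names are assumptions based on the test defaults (host_management.sh actually batches its RPCs through a generated rpcs.txt):

sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420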
00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.687 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:55.946 [2024-11-26 18:28:43.709965] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:55.946 Malloc0 00:32:55.946 [2024-11-26 18:28:43.786133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=740042 00:32:55.946 18:28:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 740042 /var/tmp/bdevperf.sock 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 740042 ']' 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:55.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:55.946 { 00:32:55.946 "params": { 00:32:55.946 "name": "Nvme$subsystem", 00:32:55.946 "trtype": "$TEST_TRANSPORT", 00:32:55.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:55.946 "adrfam": "ipv4", 00:32:55.946 "trsvcid": "$NVMF_PORT", 00:32:55.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:55.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:55.946 "hdgst": ${hdgst:-false}, 00:32:55.946 "ddgst": ${ddgst:-false} 00:32:55.946 }, 00:32:55.946 "method": "bdev_nvme_attach_controller" 00:32:55.946 } 00:32:55.946 EOF 00:32:55.946 )") 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
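The gen_nvmf_target_json trace here builds the --json payload handed to bdevperf on fd 63; the filled-in bdev_nvme_attach_controller parameters it produces are printed just below. A minimal standalone sketch, with those parameters copied from that output and wrapped in the generic SPDK "subsystems"/"bdev" JSON-config shape (the wrapper and the /tmp file name are illustrative assumptions, not part of this log):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10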
00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:55.946 18:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:55.946 "params": { 00:32:55.946 "name": "Nvme0", 00:32:55.946 "trtype": "tcp", 00:32:55.946 "traddr": "10.0.0.2", 00:32:55.946 "adrfam": "ipv4", 00:32:55.946 "trsvcid": "4420", 00:32:55.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:55.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:55.946 "hdgst": false, 00:32:55.946 "ddgst": false 00:32:55.946 }, 00:32:55.946 "method": "bdev_nvme_attach_controller" 00:32:55.946 }' 00:32:55.946 [2024-11-26 18:28:43.870559] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:32:55.946 [2024-11-26 18:28:43.870664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740042 ] 00:32:55.946 [2024-11-26 18:28:43.940707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.204 [2024-11-26 18:28:44.001026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.462 Running I/O for 10 seconds... 00:32:56.462 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.462 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:56.462 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:56.462 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.462 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:32:56.463 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 [2024-11-26 18:28:44.617998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618065] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.722 [2024-11-26 18:28:44.618292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the 
state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b73a0 is same with the state(6) to be set 00:32:56.723 [2024-11-26 18:28:44.618760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.618801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.618831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.618847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.618863] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.618877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.618891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.618906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.618921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.618935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.618949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.618963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.618978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619467] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.723 [2024-11-26 18:28:44.619697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.723 [2024-11-26 18:28:44.619711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.619977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.619992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.724 [2024-11-26 18:28:44.620718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.724 [2024-11-26 18:28:44.620752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:32:56.724 [2024-11-26 18:28:44.621954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:56.724 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.724 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:56.724 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.724 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:56.724 task offset: 81920 on job bdev=Nvme0n1 fails 00:32:56.724 00:32:56.724 Latency(us) 00:32:56.724 [2024-11-26T17:28:44.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.724 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:56.724 Job: Nvme0n1 ended in about 0.39 seconds with error 00:32:56.724 Verification LBA range: start 0x0 length 0x400 00:32:56.724 Nvme0n1 : 0.39 1621.37 101.34 162.14 0.00 34836.01 2997.67 34758.35 00:32:56.724 [2024-11-26T17:28:44.735Z] =================================================================================================================== 00:32:56.724 [2024-11-26T17:28:44.735Z] Total : 1621.37 101.34 162.14 0.00 34836.01 2997.67 34758.35 00:32:56.725 [2024-11-26 18:28:44.623872] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:56.725 [2024-11-26 18:28:44.623899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181fa50 (9): Bad file descriptor 00:32:56.725 [2024-11-26 18:28:44.625034] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:32:56.725 [2024-11-26 18:28:44.625133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:56.725 [2024-11-26 18:28:44.625160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.725 [2024-11-26 18:28:44.625186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:32:56.725 [2024-11-26 18:28:44.625208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:32:56.725 [2024-11-26 18:28:44.625222] 
nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:56.725 [2024-11-26 18:28:44.625234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x181fa50 00:32:56.725 [2024-11-26 18:28:44.625268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181fa50 (9): Bad file descriptor 00:32:56.725 [2024-11-26 18:28:44.625299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:56.725 [2024-11-26 18:28:44.625326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:56.725 [2024-11-26 18:28:44.625341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:56.725 [2024-11-26 18:28:44.625357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:56.725 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.725 18:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 740042 00:32:57.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (740042) - No such process 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:57.659 { 00:32:57.659 "params": { 00:32:57.659 "name": "Nvme$subsystem", 00:32:57.659 "trtype": "$TEST_TRANSPORT", 00:32:57.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.659 "adrfam": "ipv4", 00:32:57.659 "trsvcid": "$NVMF_PORT", 00:32:57.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.659 "hdgst": ${hdgst:-false}, 00:32:57.659 "ddgst": ${ddgst:-false} 00:32:57.659 }, 00:32:57.659 "method": "bdev_nvme_attach_controller" 00:32:57.659 } 00:32:57.659 EOF 00:32:57.659 )") 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:57.659 18:28:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:57.659 18:28:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:57.659 "params": { 00:32:57.659 "name": "Nvme0", 00:32:57.659 "trtype": "tcp", 00:32:57.659 "traddr": "10.0.0.2", 00:32:57.659 "adrfam": "ipv4", 00:32:57.659 "trsvcid": "4420", 00:32:57.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:57.659 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:57.659 "hdgst": false, 00:32:57.659 "ddgst": false 00:32:57.659 }, 00:32:57.659 "method": "bdev_nvme_attach_controller" 00:32:57.659 }' 00:32:57.917 [2024-11-26 18:28:45.679996] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:32:57.917 [2024-11-26 18:28:45.680082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740312 ] 00:32:57.917 [2024-11-26 18:28:45.749705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.917 [2024-11-26 18:28:45.810595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.174 Running I/O for 1 seconds... 00:32:59.553 1664.00 IOPS, 104.00 MiB/s 00:32:59.553 Latency(us) 00:32:59.553 [2024-11-26T17:28:47.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:59.553 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:59.553 Verification LBA range: start 0x0 length 0x400 00:32:59.553 Nvme0n1 : 1.03 1680.91 105.06 0.00 0.00 37457.27 5024.43 33204.91 00:32:59.553 [2024-11-26T17:28:47.565Z] =================================================================================================================== 00:32:59.554 [2024-11-26T17:28:47.565Z] Total : 1680.91 105.06 0.00 0.00 37457.27 5024.43 33204.91 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.554 rmmod nvme_tcp 00:32:59.554 rmmod nvme_fabrics 00:32:59.554 rmmod nvme_keyring 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 739994 ']' 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 739994 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 739994 ']' 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 739994 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 739994 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:59.554 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 739994' 00:32:59.554 killing process with pid 739994 00:32:59.555 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 739994 00:32:59.555 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 739994 00:32:59.814 [2024-11-26 18:28:47.756243] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.814 18:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.814 18:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:33:02.351 00:33:02.351 real 0m8.900s 00:33:02.351 user 0m17.873s 00:33:02.351 sys 0m3.759s 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:33:02.351 ************************************ 00:33:02.351 END TEST nvmf_host_management 00:33:02.351 ************************************ 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:02.351 ************************************ 00:33:02.351 START TEST nvmf_lvol 00:33:02.351 ************************************ 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:33:02.351 * Looking for test storage... 
00:33:02.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:33:02.351 18:28:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:02.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.351 --rc genhtml_branch_coverage=1 00:33:02.351 --rc genhtml_function_coverage=1 00:33:02.351 --rc genhtml_legend=1 00:33:02.351 --rc geninfo_all_blocks=1 00:33:02.351 --rc geninfo_unexecuted_blocks=1 00:33:02.351 00:33:02.351 ' 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:02.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.351 --rc genhtml_branch_coverage=1 00:33:02.351 --rc genhtml_function_coverage=1 00:33:02.351 --rc genhtml_legend=1 00:33:02.351 --rc geninfo_all_blocks=1 00:33:02.351 --rc geninfo_unexecuted_blocks=1 00:33:02.351 00:33:02.351 ' 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:02.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.351 --rc genhtml_branch_coverage=1 00:33:02.351 --rc genhtml_function_coverage=1 00:33:02.351 --rc genhtml_legend=1 00:33:02.351 --rc geninfo_all_blocks=1 00:33:02.351 --rc geninfo_unexecuted_blocks=1 00:33:02.351 00:33:02.351 ' 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:02.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.351 --rc genhtml_branch_coverage=1 00:33:02.351 --rc genhtml_function_coverage=1 00:33:02.351 --rc genhtml_legend=1 00:33:02.351 --rc geninfo_all_blocks=1 00:33:02.351 --rc geninfo_unexecuted_blocks=1 00:33:02.351 00:33:02.351 ' 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.351 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.352 18:28:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:33:02.352 18:28:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:04.256 18:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:04.256 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:04.256 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.256 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:04.257 Found net devices under 0000:09:00.0: cvl_0_0 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:04.257 Found net devices under 0000:09:00.1: cvl_0_1 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:04.257 
18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:04.257 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:04.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:04.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:33:04.516 00:33:04.516 --- 10.0.0.2 ping statistics --- 00:33:04.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.516 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:04.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:04.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:33:04.516 00:33:04.516 --- 10.0.0.1 ping statistics --- 00:33:04.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.516 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=742515 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 742515 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 742515 ']' 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:04.516 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:04.516 [2024-11-26 18:28:52.381685] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
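For reference, the nvmftestinit sequence traced above reduces to splitting the two E810 ports across network namespaces so that target and initiator traffic really crosses the NIC pair, then verifying the path with ping before the target comes up. A minimal stand-alone sketch of the same topology, using the interface names and 10.0.0.0/24 addressing from this run (other rigs will differ):

    # target port goes into its own namespace, initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # reach the target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # and back

With the path verified, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7), which is why the EAL output that follows reports three cores and reactors running in interrupt mode.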
00:33:04.516 [2024-11-26 18:28:52.382764] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:33:04.516 [2024-11-26 18:28:52.382840] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.516 [2024-11-26 18:28:52.457227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:04.516 [2024-11-26 18:28:52.514402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.516 [2024-11-26 18:28:52.514449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.516 [2024-11-26 18:28:52.514463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.516 [2024-11-26 18:28:52.514474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.516 [2024-11-26 18:28:52.514483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:04.516 [2024-11-26 18:28:52.515807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.516 [2024-11-26 18:28:52.515866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:04.516 [2024-11-26 18:28:52.515869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.774 [2024-11-26 18:28:52.602327] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:04.775 [2024-11-26 18:28:52.602571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:04.775 [2024-11-26 18:28:52.602612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:04.775 [2024-11-26 18:28:52.602828] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
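The RPC traffic that follows provisions the volume under test: a raid0 built from two 64 MiB malloc bdevs becomes an lvstore, a 20 MiB lvol is carved out of it, and the lvol is exported over NVMe/TCP at 10.0.0.2:4420. Condensed into one sequence from the calls traced below (rpc.py path shortened; the UUIDs in this run are random):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                     # prints Malloc0
    $rpc bdev_malloc_create 64 512                     # prints Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # lvstore UUID on stdout
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # lvol bdev name on stdout
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

While spdk_nvme_perf drives 10 seconds of 4 KiB random writes against that namespace, the script takes a snapshot, resizes the lvol from 20 to 30 MiB, clones the snapshot, and inflates the clone, exercising lvol metadata operations under live I/O.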
00:33:04.775 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:04.775 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:33:04.775 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:04.775 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:04.775 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:04.775 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.775 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:05.048 [2024-11-26 18:28:52.908541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.048 18:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:05.306 18:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:33:05.306 18:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:05.603 18:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:33:05.603 18:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:33:05.889 18:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:33:06.146 18:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ff9a77b9-17ff-441d-9479-5429d7827a62 00:33:06.146 18:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ff9a77b9-17ff-441d-9479-5429d7827a62 lvol 20 00:33:06.404 18:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=39242d7f-30e3-4500-9ddb-cb5c67646932 00:33:06.404 18:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:06.662 18:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 39242d7f-30e3-4500-9ddb-cb5c67646932 00:33:07.227 18:28:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:07.227 [2024-11-26 18:28:55.172680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:33:07.227 18:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:07.485 18:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=742889 00:33:07.485 18:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:33:07.485 18:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:33:08.866 18:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 39242d7f-30e3-4500-9ddb-cb5c67646932 MY_SNAPSHOT 00:33:08.866 18:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6094b445-a758-4ac5-8106-d7ae68f2f521 00:33:08.866 18:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 39242d7f-30e3-4500-9ddb-cb5c67646932 30 00:33:09.124 18:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6094b445-a758-4ac5-8106-d7ae68f2f521 MY_CLONE 00:33:09.689 18:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=20e3d77b-b1d1-41ac-b8d6-5b5f52b261e5 00:33:09.689 18:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 20e3d77b-b1d1-41ac-b8d6-5b5f52b261e5 00:33:09.947 18:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 742889 00:33:18.050 Initializing NVMe Controllers 00:33:18.050 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:18.050 Controller IO queue size 128, less than required. 00:33:18.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:18.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:18.050 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:18.050 Initialization complete. Launching workers. 
00:33:18.050 ======================================================== 00:33:18.050 Latency(us) 00:33:18.050 Device Information : IOPS MiB/s Average min max 00:33:18.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10619.10 41.48 12060.97 4917.02 73489.23 00:33:18.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10419.70 40.70 12290.68 6984.04 75836.87 00:33:18.051 ======================================================== 00:33:18.051 Total : 21038.79 82.18 12174.73 4917.02 75836.87 00:33:18.051 00:33:18.051 18:29:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:18.309 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 39242d7f-30e3-4500-9ddb-cb5c67646932 00:33:18.566 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff9a77b9-17ff-441d-9479-5429d7827a62 00:33:18.823 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:18.823 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:18.823 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:33:18.823 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:18.824 rmmod nvme_tcp 00:33:18.824 rmmod nvme_fabrics 00:33:18.824 rmmod nvme_keyring 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 742515 ']' 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 742515 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 742515 ']' 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 742515 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 742515 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 742515' 00:33:18.824 killing process with pid 742515 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 742515 00:33:18.824 18:29:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 742515 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.082 18:29:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:21.611 00:33:21.611 real 0m19.159s 00:33:21.611 user 0m56.588s 00:33:21.611 sys 0m7.437s 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:21.611 ************************************ 00:33:21.611 END TEST nvmf_lvol 00:33:21.611 ************************************ 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:21.611 ************************************ 00:33:21.611 START TEST nvmf_lvs_grow 00:33:21.611 
************************************ 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:21.611 * Looking for test storage... 00:33:21.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:21.611 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.612 --rc genhtml_branch_coverage=1 00:33:21.612 --rc genhtml_function_coverage=1 00:33:21.612 --rc genhtml_legend=1 00:33:21.612 --rc geninfo_all_blocks=1 00:33:21.612 --rc geninfo_unexecuted_blocks=1 00:33:21.612 00:33:21.612 ' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.612 --rc genhtml_branch_coverage=1 00:33:21.612 --rc genhtml_function_coverage=1 00:33:21.612 --rc genhtml_legend=1 00:33:21.612 --rc geninfo_all_blocks=1 00:33:21.612 --rc geninfo_unexecuted_blocks=1 00:33:21.612 00:33:21.612 ' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.612 --rc genhtml_branch_coverage=1 00:33:21.612 --rc genhtml_function_coverage=1 00:33:21.612 --rc genhtml_legend=1 00:33:21.612 --rc geninfo_all_blocks=1 00:33:21.612 --rc geninfo_unexecuted_blocks=1 00:33:21.612 00:33:21.612 ' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.612 --rc genhtml_branch_coverage=1 00:33:21.612 --rc genhtml_function_coverage=1 00:33:21.612 --rc genhtml_legend=1 00:33:21.612 --rc geninfo_all_blocks=1 00:33:21.612 --rc geninfo_unexecuted_blocks=1 00:33:21.612 00:33:21.612 ' 00:33:21.612 18:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
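The version probe above (lt 1.15 2 via cmp_versions) feeds the choice of lcov option spelling that lands in the LCOV_OPTS/LCOV exports seen earlier. A condensed sketch of the field-by-field comparison; the real cmp_versions in scripts/common.sh is more general (multiple operators, mixed-length versions), this only shows the idea:

    lt() {   # true when version $1 sorts before version $2
        local IFS=.-:                # split on dots, dashes and colons, as the script does
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                     # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is a 1.x release"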
00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:21.612 18:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.515 18:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
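Interface discovery in gather_supported_nvmf_pci_devs works purely from PCI vendor:device IDs: Intel E810 ports (0x8086:0x1592 and 0x8086:0x159b) fill the e810 array, X722 (0x8086:0x37d2) fills x722, and the Mellanox ConnectX IDs fill mlx; this run takes the e810 branch ([[ e810 == e810 ]] above) and reads each port's netdev name back from sysfs. A rough stand-alone equivalent using lspci (the script itself walks a pre-built pci_bus_cache rather than shelling out to lspci):

    # enumerate E810 ports by device ID and map each to its netdev via sysfs, as the log does
    for dev in $(lspci -D -d 8086:159b | awk '{print $1}') \
               $(lspci -D -d 8086:1592 | awk '{print $1}'); do
        echo "$dev -> $(ls /sys/bus/pci/devices/$dev/net/ 2>/dev/null)"
    done

On this rig that yields the two ports at 0000:09:00.0 and 0000:09:00.1, exposed as cvl_0_0 and cvl_0_1.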
00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:23.515 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:23.515 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:23.515 Found net devices under 0000:09:00.0: cvl_0_0 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.515 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:23.516 Found net devices under 0000:09:00.1: cvl_0_1 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.516 18:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:23.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:33:23.516 00:33:23.516 --- 10.0.0.2 ping statistics --- 00:33:23.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.516 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:23.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:33:23.516 00:33:23.516 --- 10.0.0.1 ping statistics --- 00:33:23.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.516 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=746199 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 746199 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 746199 ']' 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.516 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:23.516 [2024-11-26 18:29:11.503479] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
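[Editor's note] To make the nvmf_tcp_init block above easier to follow, here is a condensed, hand-written sketch of the same plumbing: the target-side port is moved into a private network namespace, both ends are addressed, NVMe/TCP traffic to port 4420 is allowed through the firewall, reachability is checked, and every nvmf_tgt in this test then runs inside that namespace in interrupt mode. Interface names (cvl_0_0 / cvl_0_1) and addresses are taken from the trace; the plain iptables call stands in for the suite's ipts wrapper and paths are abbreviated, so treat this as an illustration rather than the script itself.

    # Move the target-side port into its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic to port 4420 and sanity-check reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target application for this test then runs inside the namespace, in interrupt mode:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1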
00:33:23.516 [2024-11-26 18:29:11.504558] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:33:23.516 [2024-11-26 18:29:11.504629] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.774 [2024-11-26 18:29:11.577446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.774 [2024-11-26 18:29:11.633753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.774 [2024-11-26 18:29:11.633806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.774 [2024-11-26 18:29:11.633830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.774 [2024-11-26 18:29:11.633841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.774 [2024-11-26 18:29:11.633850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:23.774 [2024-11-26 18:29:11.634449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.774 [2024-11-26 18:29:11.723427] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:23.774 [2024-11-26 18:29:11.723740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:23.774 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:23.774 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:23.774 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:23.774 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:23.774 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:23.774 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.775 18:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:24.343 [2024-11-26 18:29:12.083092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:24.343 ************************************ 00:33:24.343 START TEST lvs_grow_clean 00:33:24.343 ************************************ 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:24.343 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:24.344 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:24.344 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:24.344 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:24.344 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:24.602 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:24.602 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:24.870 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=11a92089-1677-49d7-8182-dedc5228463a 00:33:24.870 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:24.870 18:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:25.137 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:25.137 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:25.137 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 11a92089-1677-49d7-8182-dedc5228463a lvol 150 00:33:25.395 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=955b2ab9-49fc-4e32-aaa3-bf9beef76807 00:33:25.396 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:25.396 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:25.655 [2024-11-26 18:29:13.598959] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:25.655 [2024-11-26 18:29:13.599058] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:25.655 true 00:33:25.655 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:25.655 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:25.913 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:25.913 18:29:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:26.171 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 955b2ab9-49fc-4e32-aaa3-bf9beef76807 00:33:26.738 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:26.738 [2024-11-26 18:29:14.703335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.738 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:26.997 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=746633 00:33:26.997 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:26.997 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:26.997 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 746633 /var/tmp/bdevperf.sock 00:33:26.997 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 746633 ']' 00:33:26.997 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:26.997 18:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.997 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:26.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:26.997 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.997 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:27.256 [2024-11-26 18:29:15.043610] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:33:27.256 [2024-11-26 18:29:15.043680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746633 ] 00:33:27.256 [2024-11-26 18:29:15.110073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.256 [2024-11-26 18:29:15.170258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.514 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.514 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:27.514 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:27.773 Nvme0n1 00:33:27.773 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:28.032 [ 00:33:28.032 { 00:33:28.032 "name": "Nvme0n1", 00:33:28.032 "aliases": [ 00:33:28.032 "955b2ab9-49fc-4e32-aaa3-bf9beef76807" 00:33:28.032 ], 00:33:28.032 "product_name": "NVMe disk", 00:33:28.032 "block_size": 4096, 00:33:28.032 "num_blocks": 38912, 00:33:28.032 "uuid": "955b2ab9-49fc-4e32-aaa3-bf9beef76807", 00:33:28.032 "numa_id": 0, 00:33:28.032 "assigned_rate_limits": { 00:33:28.032 "rw_ios_per_sec": 0, 00:33:28.032 "rw_mbytes_per_sec": 0, 00:33:28.032 "r_mbytes_per_sec": 0, 00:33:28.032 "w_mbytes_per_sec": 0 00:33:28.032 }, 00:33:28.032 "claimed": false, 00:33:28.032 "zoned": false, 00:33:28.032 "supported_io_types": { 00:33:28.032 "read": true, 00:33:28.032 "write": true, 00:33:28.032 "unmap": true, 00:33:28.032 "flush": true, 00:33:28.032 "reset": true, 00:33:28.032 "nvme_admin": true, 00:33:28.032 "nvme_io": true, 00:33:28.032 "nvme_io_md": false, 00:33:28.032 "write_zeroes": true, 00:33:28.032 "zcopy": false, 00:33:28.032 "get_zone_info": false, 00:33:28.032 "zone_management": false, 00:33:28.032 "zone_append": false, 00:33:28.032 "compare": true, 00:33:28.032 "compare_and_write": true, 00:33:28.032 "abort": true, 00:33:28.032 "seek_hole": false, 00:33:28.032 "seek_data": false, 00:33:28.032 "copy": true, 
00:33:28.032 "nvme_iov_md": false 00:33:28.032 }, 00:33:28.032 "memory_domains": [ 00:33:28.032 { 00:33:28.032 "dma_device_id": "system", 00:33:28.032 "dma_device_type": 1 00:33:28.032 } 00:33:28.032 ], 00:33:28.032 "driver_specific": { 00:33:28.032 "nvme": [ 00:33:28.032 { 00:33:28.032 "trid": { 00:33:28.032 "trtype": "TCP", 00:33:28.032 "adrfam": "IPv4", 00:33:28.032 "traddr": "10.0.0.2", 00:33:28.032 "trsvcid": "4420", 00:33:28.032 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:28.032 }, 00:33:28.032 "ctrlr_data": { 00:33:28.032 "cntlid": 1, 00:33:28.032 "vendor_id": "0x8086", 00:33:28.032 "model_number": "SPDK bdev Controller", 00:33:28.032 "serial_number": "SPDK0", 00:33:28.032 "firmware_revision": "25.01", 00:33:28.032 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:28.032 "oacs": { 00:33:28.032 "security": 0, 00:33:28.032 "format": 0, 00:33:28.032 "firmware": 0, 00:33:28.032 "ns_manage": 0 00:33:28.032 }, 00:33:28.032 "multi_ctrlr": true, 00:33:28.032 "ana_reporting": false 00:33:28.032 }, 00:33:28.032 "vs": { 00:33:28.032 "nvme_version": "1.3" 00:33:28.032 }, 00:33:28.032 "ns_data": { 00:33:28.032 "id": 1, 00:33:28.032 "can_share": true 00:33:28.032 } 00:33:28.032 } 00:33:28.032 ], 00:33:28.032 "mp_policy": "active_passive" 00:33:28.032 } 00:33:28.032 } 00:33:28.032 ] 00:33:28.032 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=746765 00:33:28.032 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:28.032 18:29:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:28.290 Running I/O for 10 seconds... 
00:33:29.223 Latency(us) 00:33:29.223 [2024-11-26T17:29:17.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:29.223 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:33:29.223 [2024-11-26T17:29:17.234Z] =================================================================================================================== 00:33:29.223 [2024-11-26T17:29:17.234Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:33:29.223 00:33:30.156 18:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:30.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:30.156 Nvme0n1 : 2.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:33:30.156 [2024-11-26T17:29:18.167Z] =================================================================================================================== 00:33:30.156 [2024-11-26T17:29:18.167Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:33:30.156 00:33:30.413 true 00:33:30.413 18:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:30.413 18:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:30.672 18:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:30.672 18:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:30.672 18:29:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 746765 00:33:31.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:31.275 Nvme0n1 : 3.00 15197.67 59.37 0.00 0.00 0.00 0.00 0.00 00:33:31.275 [2024-11-26T17:29:19.286Z] =================================================================================================================== 00:33:31.275 [2024-11-26T17:29:19.286Z] Total : 15197.67 59.37 0.00 0.00 0.00 0.00 0.00 00:33:31.275 00:33:32.207 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:32.207 Nvme0n1 : 4.00 15224.25 59.47 0.00 0.00 0.00 0.00 0.00 00:33:32.207 [2024-11-26T17:29:20.218Z] =================================================================================================================== 00:33:32.207 [2024-11-26T17:29:20.218Z] Total : 15224.25 59.47 0.00 0.00 0.00 0.00 0.00 00:33:32.207 00:33:33.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:33.141 Nvme0n1 : 5.00 15290.80 59.73 0.00 0.00 0.00 0.00 0.00 00:33:33.141 [2024-11-26T17:29:21.152Z] =================================================================================================================== 00:33:33.141 [2024-11-26T17:29:21.152Z] Total : 15290.80 59.73 0.00 0.00 0.00 0.00 0.00 00:33:33.141 00:33:34.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:34.515 Nvme0n1 : 6.00 15345.83 59.94 0.00 0.00 0.00 0.00 0.00 00:33:34.515 [2024-11-26T17:29:22.526Z] 
=================================================================================================================== 00:33:34.515 [2024-11-26T17:29:22.526Z] Total : 15345.83 59.94 0.00 0.00 0.00 0.00 0.00 00:33:34.515 00:33:35.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:35.449 Nvme0n1 : 7.00 15403.29 60.17 0.00 0.00 0.00 0.00 0.00 00:33:35.449 [2024-11-26T17:29:23.460Z] =================================================================================================================== 00:33:35.449 [2024-11-26T17:29:23.460Z] Total : 15403.29 60.17 0.00 0.00 0.00 0.00 0.00 00:33:35.449 00:33:36.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:36.381 Nvme0n1 : 8.00 15446.38 60.34 0.00 0.00 0.00 0.00 0.00 00:33:36.381 [2024-11-26T17:29:24.392Z] =================================================================================================================== 00:33:36.381 [2024-11-26T17:29:24.392Z] Total : 15446.38 60.34 0.00 0.00 0.00 0.00 0.00 00:33:36.381 00:33:37.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:37.315 Nvme0n1 : 9.00 15479.89 60.47 0.00 0.00 0.00 0.00 0.00 00:33:37.315 [2024-11-26T17:29:25.326Z] =================================================================================================================== 00:33:37.315 [2024-11-26T17:29:25.326Z] Total : 15479.89 60.47 0.00 0.00 0.00 0.00 0.00 00:33:37.315 00:33:38.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:38.249 Nvme0n1 : 10.00 15513.10 60.60 0.00 0.00 0.00 0.00 0.00 00:33:38.249 [2024-11-26T17:29:26.260Z] =================================================================================================================== 00:33:38.249 [2024-11-26T17:29:26.260Z] Total : 15513.10 60.60 0.00 0.00 0.00 0.00 0.00 00:33:38.249 00:33:38.249 00:33:38.249 Latency(us) 00:33:38.249 [2024-11-26T17:29:26.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:38.249 Nvme0n1 : 10.01 15517.06 60.61 0.00 0.00 8243.68 4708.88 18155.90 00:33:38.249 [2024-11-26T17:29:26.260Z] =================================================================================================================== 00:33:38.249 [2024-11-26T17:29:26.260Z] Total : 15517.06 60.61 0.00 0.00 8243.68 4708.88 18155.90 00:33:38.249 { 00:33:38.249 "results": [ 00:33:38.249 { 00:33:38.249 "job": "Nvme0n1", 00:33:38.249 "core_mask": "0x2", 00:33:38.249 "workload": "randwrite", 00:33:38.249 "status": "finished", 00:33:38.249 "queue_depth": 128, 00:33:38.249 "io_size": 4096, 00:33:38.249 "runtime": 10.0057, 00:33:38.249 "iops": 15517.05527849126, 00:33:38.249 "mibps": 60.61349718160648, 00:33:38.249 "io_failed": 0, 00:33:38.249 "io_timeout": 0, 00:33:38.249 "avg_latency_us": 8243.682773325241, 00:33:38.249 "min_latency_us": 4708.882962962963, 00:33:38.249 "max_latency_us": 18155.89925925926 00:33:38.249 } 00:33:38.249 ], 00:33:38.249 "core_count": 1 00:33:38.249 } 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 746633 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 746633 ']' 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 746633 00:33:38.249 
18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 746633 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 746633' 00:33:38.249 killing process with pid 746633 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 746633 00:33:38.249 Received shutdown signal, test time was about 10.000000 seconds 00:33:38.249 00:33:38.249 Latency(us) 00:33:38.249 [2024-11-26T17:29:26.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.249 [2024-11-26T17:29:26.260Z] =================================================================================================================== 00:33:38.249 [2024-11-26T17:29:26.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:38.249 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 746633 00:33:38.508 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:38.767 18:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:39.333 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:39.333 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:39.333 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:39.333 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:39.333 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:39.591 [2024-11-26 18:29:27.578999] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:39.849 18:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:39.849 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:40.109 request: 00:33:40.109 { 00:33:40.109 "uuid": "11a92089-1677-49d7-8182-dedc5228463a", 00:33:40.109 "method": "bdev_lvol_get_lvstores", 00:33:40.109 "req_id": 1 00:33:40.109 } 00:33:40.109 Got JSON-RPC error response 00:33:40.109 response: 00:33:40.109 { 00:33:40.109 "code": -19, 00:33:40.109 "message": "No such device" 00:33:40.109 } 00:33:40.109 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:40.109 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:40.109 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:40.109 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:40.109 18:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:40.367 aio_bdev 00:33:40.367 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 955b2ab9-49fc-4e32-aaa3-bf9beef76807 
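[Editor's note] After the bdevperf run and the grow, the clean path checks that deleting aio_bdev really tears the lvstore down and that re-creating the bdev brings the logical volume back. The NOT wrapper seen above inverts the exit status of the wrapped command (es=1 is the expected outcome), so the lvstore lookup is required to fail with -19 (No such device). Roughly, with the same placeholder names as before:

    $rpc bdev_aio_delete aio_bdev                 # hot-removes the lvstore with it
    if $rpc bdev_lvol_get_lvstores -u "$lvs"; then
        echo "lvstore unexpectedly still present" >&2; exit 1
    fi                                            # expected: JSON-RPC error -19 (No such device)
    $rpc bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
    # waitforbdev then polls until the lvol bdev is examined and re-registered:
    $rpc bdev_get_bdevs -b "$lvol" -t 2000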
00:33:40.367 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=955b2ab9-49fc-4e32-aaa3-bf9beef76807 00:33:40.367 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:40.367 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:40.367 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:40.367 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:40.367 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:40.625 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 955b2ab9-49fc-4e32-aaa3-bf9beef76807 -t 2000 00:33:41.190 [ 00:33:41.190 { 00:33:41.190 "name": "955b2ab9-49fc-4e32-aaa3-bf9beef76807", 00:33:41.190 "aliases": [ 00:33:41.190 "lvs/lvol" 00:33:41.190 ], 00:33:41.190 "product_name": "Logical Volume", 00:33:41.190 "block_size": 4096, 00:33:41.190 "num_blocks": 38912, 00:33:41.190 "uuid": "955b2ab9-49fc-4e32-aaa3-bf9beef76807", 00:33:41.190 "assigned_rate_limits": { 00:33:41.190 "rw_ios_per_sec": 0, 00:33:41.190 "rw_mbytes_per_sec": 0, 00:33:41.190 "r_mbytes_per_sec": 0, 00:33:41.190 "w_mbytes_per_sec": 0 00:33:41.190 }, 00:33:41.190 "claimed": false, 00:33:41.190 "zoned": false, 00:33:41.190 "supported_io_types": { 00:33:41.190 "read": true, 00:33:41.190 "write": true, 00:33:41.190 "unmap": true, 00:33:41.190 "flush": false, 00:33:41.190 "reset": true, 00:33:41.190 "nvme_admin": false, 00:33:41.190 "nvme_io": false, 00:33:41.190 "nvme_io_md": false, 00:33:41.190 "write_zeroes": true, 00:33:41.190 "zcopy": false, 00:33:41.190 "get_zone_info": false, 00:33:41.190 "zone_management": false, 00:33:41.190 "zone_append": false, 00:33:41.190 "compare": false, 00:33:41.190 "compare_and_write": false, 00:33:41.190 "abort": false, 00:33:41.190 "seek_hole": true, 00:33:41.190 "seek_data": true, 00:33:41.190 "copy": false, 00:33:41.190 "nvme_iov_md": false 00:33:41.190 }, 00:33:41.190 "driver_specific": { 00:33:41.190 "lvol": { 00:33:41.190 "lvol_store_uuid": "11a92089-1677-49d7-8182-dedc5228463a", 00:33:41.190 "base_bdev": "aio_bdev", 00:33:41.190 "thin_provision": false, 00:33:41.190 "num_allocated_clusters": 38, 00:33:41.190 "snapshot": false, 00:33:41.190 "clone": false, 00:33:41.190 "esnap_clone": false 00:33:41.190 } 00:33:41.190 } 00:33:41.190 } 00:33:41.190 ] 00:33:41.190 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:41.190 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:41.190 18:29:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:41.447 18:29:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:41.447 18:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:41.447 18:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:41.705 18:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:41.705 18:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 955b2ab9-49fc-4e32-aaa3-bf9beef76807 00:33:41.963 18:29:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 11a92089-1677-49d7-8182-dedc5228463a 00:33:42.221 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:42.479 00:33:42.479 real 0m18.234s 00:33:42.479 user 0m17.774s 00:33:42.479 sys 0m1.928s 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:42.479 ************************************ 00:33:42.479 END TEST lvs_grow_clean 00:33:42.479 ************************************ 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:42.479 ************************************ 00:33:42.479 START TEST lvs_grow_dirty 00:33:42.479 ************************************ 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:42.479 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:42.737 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:42.737 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:42.996 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:42.996 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:42.996 18:29:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:43.561 18:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:43.561 18:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:43.561 18:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 lvol 150 00:33:43.561 18:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=aa6ba715-7a09-4dcf-97c0-b33f561015f3 00:33:43.561 18:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:43.561 18:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:43.819 [2024-11-26 18:29:31.798969] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:43.819 [2024-11-26 18:29:31.799063] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:43.819 true 00:33:43.819 18:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:43.819 18:29:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:44.386 18:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:44.386 18:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:44.386 18:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa6ba715-7a09-4dcf-97c0-b33f561015f3 00:33:44.644 18:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:44.902 [2024-11-26 18:29:32.891254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:44.902 18:29:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=748798 00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 748798 /var/tmp/bdevperf.sock 00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 748798 ']' 00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:45.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
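[Editor's note] As in the clean case earlier in the trace, load generation comes from bdevperf started idle (-z) on its own RPC socket; the NVMe-oF controller is then attached over that socket and the workload is kicked off with bdevperf.py perform_tests. A compressed sketch with shortened paths (the flags mirror the trace):

    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
        -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!                               # test waits for /var/tmp/bdevperf.sock
    rpc=scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # exposes Nvme0n1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!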
00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:45.469 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:45.469 [2024-11-26 18:29:33.218954] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:33:45.469 [2024-11-26 18:29:33.219026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid748798 ] 00:33:45.470 [2024-11-26 18:29:33.286543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.470 [2024-11-26 18:29:33.345069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.470 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.470 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:45.470 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:46.035 Nvme0n1 00:33:46.036 18:29:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:46.294 [ 00:33:46.294 { 00:33:46.294 "name": "Nvme0n1", 00:33:46.294 "aliases": [ 00:33:46.294 "aa6ba715-7a09-4dcf-97c0-b33f561015f3" 00:33:46.294 ], 00:33:46.294 "product_name": "NVMe disk", 00:33:46.294 "block_size": 4096, 00:33:46.294 "num_blocks": 38912, 00:33:46.294 "uuid": "aa6ba715-7a09-4dcf-97c0-b33f561015f3", 00:33:46.294 "numa_id": 0, 00:33:46.294 "assigned_rate_limits": { 00:33:46.294 "rw_ios_per_sec": 0, 00:33:46.294 "rw_mbytes_per_sec": 0, 00:33:46.294 "r_mbytes_per_sec": 0, 00:33:46.294 "w_mbytes_per_sec": 0 00:33:46.294 }, 00:33:46.294 "claimed": false, 00:33:46.294 "zoned": false, 00:33:46.294 "supported_io_types": { 00:33:46.294 "read": true, 00:33:46.294 "write": true, 00:33:46.294 "unmap": true, 00:33:46.294 "flush": true, 00:33:46.294 "reset": true, 00:33:46.294 "nvme_admin": true, 00:33:46.294 "nvme_io": true, 00:33:46.294 "nvme_io_md": false, 00:33:46.294 "write_zeroes": true, 00:33:46.294 "zcopy": false, 00:33:46.294 "get_zone_info": false, 00:33:46.294 "zone_management": false, 00:33:46.294 "zone_append": false, 00:33:46.294 "compare": true, 00:33:46.294 "compare_and_write": true, 00:33:46.294 "abort": true, 00:33:46.294 "seek_hole": false, 00:33:46.294 "seek_data": false, 00:33:46.294 "copy": true, 00:33:46.294 "nvme_iov_md": false 00:33:46.294 }, 00:33:46.294 "memory_domains": [ 00:33:46.294 { 00:33:46.294 "dma_device_id": "system", 00:33:46.294 "dma_device_type": 1 00:33:46.294 } 00:33:46.294 ], 00:33:46.294 "driver_specific": { 00:33:46.294 "nvme": [ 00:33:46.294 { 00:33:46.294 "trid": { 00:33:46.294 "trtype": "TCP", 00:33:46.294 "adrfam": "IPv4", 00:33:46.294 "traddr": "10.0.0.2", 00:33:46.294 "trsvcid": "4420", 00:33:46.294 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:46.294 }, 00:33:46.294 "ctrlr_data": { 
00:33:46.294 "cntlid": 1, 00:33:46.294 "vendor_id": "0x8086", 00:33:46.294 "model_number": "SPDK bdev Controller", 00:33:46.294 "serial_number": "SPDK0", 00:33:46.294 "firmware_revision": "25.01", 00:33:46.294 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:46.294 "oacs": { 00:33:46.294 "security": 0, 00:33:46.294 "format": 0, 00:33:46.294 "firmware": 0, 00:33:46.294 "ns_manage": 0 00:33:46.294 }, 00:33:46.294 "multi_ctrlr": true, 00:33:46.294 "ana_reporting": false 00:33:46.294 }, 00:33:46.294 "vs": { 00:33:46.294 "nvme_version": "1.3" 00:33:46.294 }, 00:33:46.294 "ns_data": { 00:33:46.294 "id": 1, 00:33:46.294 "can_share": true 00:33:46.294 } 00:33:46.294 } 00:33:46.294 ], 00:33:46.294 "mp_policy": "active_passive" 00:33:46.294 } 00:33:46.294 } 00:33:46.294 ] 00:33:46.294 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=748930 00:33:46.294 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:46.294 18:29:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:46.552 Running I/O for 10 seconds... 00:33:47.485 Latency(us) 00:33:47.485 [2024-11-26T17:29:35.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:47.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:47.485 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:33:47.485 [2024-11-26T17:29:35.496Z] =================================================================================================================== 00:33:47.485 [2024-11-26T17:29:35.496Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:33:47.485 00:33:48.420 18:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:48.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:48.420 Nvme0n1 : 2.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:33:48.420 [2024-11-26T17:29:36.431Z] =================================================================================================================== 00:33:48.420 [2024-11-26T17:29:36.431Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:33:48.420 00:33:48.678 true 00:33:48.678 18:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:48.678 18:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:48.936 18:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:48.936 18:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:48.936 18:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 748930 00:33:49.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:49.502 Nvme0n1 : 3.00 
15197.67 59.37 0.00 0.00 0.00 0.00 0.00 00:33:49.502 [2024-11-26T17:29:37.513Z] =================================================================================================================== 00:33:49.502 [2024-11-26T17:29:37.513Z] Total : 15197.67 59.37 0.00 0.00 0.00 0.00 0.00 00:33:49.502 00:33:50.435 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:50.435 Nvme0n1 : 4.00 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:33:50.435 [2024-11-26T17:29:38.446Z] =================================================================================================================== 00:33:50.435 [2024-11-26T17:29:38.446Z] Total : 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:33:50.435 00:33:51.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:51.369 Nvme0n1 : 5.00 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:33:51.369 [2024-11-26T17:29:39.380Z] =================================================================================================================== 00:33:51.369 [2024-11-26T17:29:39.380Z] Total : 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:33:51.369 00:33:52.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:52.381 Nvme0n1 : 6.00 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:33:52.381 [2024-11-26T17:29:40.392Z] =================================================================================================================== 00:33:52.381 [2024-11-26T17:29:40.392Z] Total : 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:33:52.381 00:33:53.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:53.755 Nvme0n1 : 7.00 15426.29 60.26 0.00 0.00 0.00 0.00 0.00 00:33:53.755 [2024-11-26T17:29:41.766Z] =================================================================================================================== 00:33:53.755 [2024-11-26T17:29:41.766Z] Total : 15426.29 60.26 0.00 0.00 0.00 0.00 0.00 00:33:53.755 00:33:54.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:54.689 Nvme0n1 : 8.00 15466.50 60.42 0.00 0.00 0.00 0.00 0.00 00:33:54.689 [2024-11-26T17:29:42.700Z] =================================================================================================================== 00:33:54.689 [2024-11-26T17:29:42.700Z] Total : 15466.50 60.42 0.00 0.00 0.00 0.00 0.00 00:33:54.689 00:33:55.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:55.621 Nvme0n1 : 9.00 15511.89 60.59 0.00 0.00 0.00 0.00 0.00 00:33:55.621 [2024-11-26T17:29:43.633Z] =================================================================================================================== 00:33:55.622 [2024-11-26T17:29:43.633Z] Total : 15511.89 60.59 0.00 0.00 0.00 0.00 0.00 00:33:55.622 00:33:56.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:56.556 Nvme0n1 : 10.00 15541.90 60.71 0.00 0.00 0.00 0.00 0.00 00:33:56.556 [2024-11-26T17:29:44.567Z] =================================================================================================================== 00:33:56.556 [2024-11-26T17:29:44.567Z] Total : 15541.90 60.71 0.00 0.00 0.00 0.00 0.00 00:33:56.556 00:33:56.556 00:33:56.556 Latency(us) 00:33:56.556 [2024-11-26T17:29:44.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:56.556 Nvme0n1 : 10.00 15542.43 60.71 0.00 0.00 8230.14 4247.70 18058.81 00:33:56.556 
[2024-11-26T17:29:44.567Z] =================================================================================================================== 00:33:56.556 [2024-11-26T17:29:44.567Z] Total : 15542.43 60.71 0.00 0.00 8230.14 4247.70 18058.81 00:33:56.556 { 00:33:56.556 "results": [ 00:33:56.556 { 00:33:56.556 "job": "Nvme0n1", 00:33:56.556 "core_mask": "0x2", 00:33:56.556 "workload": "randwrite", 00:33:56.556 "status": "finished", 00:33:56.556 "queue_depth": 128, 00:33:56.556 "io_size": 4096, 00:33:56.556 "runtime": 10.003779, 00:33:56.556 "iops": 15542.42651701922, 00:33:56.556 "mibps": 60.712603582106325, 00:33:56.556 "io_failed": 0, 00:33:56.556 "io_timeout": 0, 00:33:56.556 "avg_latency_us": 8230.136958605217, 00:33:56.556 "min_latency_us": 4247.7037037037035, 00:33:56.556 "max_latency_us": 18058.80888888889 00:33:56.556 } 00:33:56.556 ], 00:33:56.556 "core_count": 1 00:33:56.556 } 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 748798 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 748798 ']' 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 748798 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 748798 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 748798' 00:33:56.556 killing process with pid 748798 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 748798 00:33:56.556 Received shutdown signal, test time was about 10.000000 seconds 00:33:56.556 00:33:56.556 Latency(us) 00:33:56.556 [2024-11-26T17:29:44.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.556 [2024-11-26T17:29:44.567Z] =================================================================================================================== 00:33:56.556 [2024-11-26T17:29:44.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:56.556 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 748798 00:33:56.816 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:57.075 18:29:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:33:57.334 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:57.334 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:57.592 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:57.592 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:57.592 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 746199 00:33:57.592 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 746199 00:33:57.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 746199 Killed "${NVMF_APP[@]}" "$@" 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=750274 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 750274 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 750274 ']' 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:57.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:57.593 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:57.851 [2024-11-26 18:29:45.646984] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:57.851 [2024-11-26 18:29:45.648017] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:33:57.851 [2024-11-26 18:29:45.648062] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:57.851 [2024-11-26 18:29:45.717624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.851 [2024-11-26 18:29:45.771926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:57.851 [2024-11-26 18:29:45.771982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:57.851 [2024-11-26 18:29:45.772013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:57.851 [2024-11-26 18:29:45.772024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:57.851 [2024-11-26 18:29:45.772035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:57.851 [2024-11-26 18:29:45.772582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.851 [2024-11-26 18:29:45.858388] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:57.851 [2024-11-26 18:29:45.858768] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
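At this point the lvs_grow dirty case has finished its timed I/O and deliberately leaves the lvstore dirty: it reads the free cluster count over RPC, kills the running target with SIGKILL so no clean blobstore shutdown happens, and brings a fresh nvmf_tgt up in interrupt mode on a single core. A condensed sketch of that sequence, using the rpc.py and nvmf_tgt invocations visible in the trace with the long workspace prefix shortened; the backgrounding and $nvmfpid bookkeeping are assumptions about how the harness tracks the target PID:

    # Snapshot the lvstore state while the old target is still running.
    free_clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 \
                    | jq -r '.[0].free_clusters')

    # Dirty shutdown: SIGKILL skips the clean blobstore unload on purpose.
    kill -9 "$nvmfpid"

    # Restart the target inside the test namespace, single core, interrupt mode.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!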
00:33:58.110 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:58.110 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:58.110 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:58.110 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:58.110 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:58.110 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.110 18:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:58.368 [2024-11-26 18:29:46.163317] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:58.368 [2024-11-26 18:29:46.163458] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:58.368 [2024-11-26 18:29:46.163510] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:58.368 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:58.368 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev aa6ba715-7a09-4dcf-97c0-b33f561015f3 00:33:58.368 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=aa6ba715-7a09-4dcf-97c0-b33f561015f3 00:33:58.368 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:58.368 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:58.368 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:58.368 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:58.368 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:58.626 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aa6ba715-7a09-4dcf-97c0-b33f561015f3 -t 2000 00:33:58.884 [ 00:33:58.884 { 00:33:58.884 "name": "aa6ba715-7a09-4dcf-97c0-b33f561015f3", 00:33:58.884 "aliases": [ 00:33:58.884 "lvs/lvol" 00:33:58.884 ], 00:33:58.884 "product_name": "Logical Volume", 00:33:58.884 "block_size": 4096, 00:33:58.884 "num_blocks": 38912, 00:33:58.884 "uuid": "aa6ba715-7a09-4dcf-97c0-b33f561015f3", 00:33:58.884 "assigned_rate_limits": { 00:33:58.884 "rw_ios_per_sec": 0, 00:33:58.884 "rw_mbytes_per_sec": 0, 00:33:58.884 
"r_mbytes_per_sec": 0, 00:33:58.884 "w_mbytes_per_sec": 0 00:33:58.884 }, 00:33:58.884 "claimed": false, 00:33:58.884 "zoned": false, 00:33:58.884 "supported_io_types": { 00:33:58.884 "read": true, 00:33:58.884 "write": true, 00:33:58.884 "unmap": true, 00:33:58.884 "flush": false, 00:33:58.884 "reset": true, 00:33:58.884 "nvme_admin": false, 00:33:58.884 "nvme_io": false, 00:33:58.884 "nvme_io_md": false, 00:33:58.884 "write_zeroes": true, 00:33:58.884 "zcopy": false, 00:33:58.884 "get_zone_info": false, 00:33:58.884 "zone_management": false, 00:33:58.884 "zone_append": false, 00:33:58.884 "compare": false, 00:33:58.884 "compare_and_write": false, 00:33:58.884 "abort": false, 00:33:58.884 "seek_hole": true, 00:33:58.884 "seek_data": true, 00:33:58.884 "copy": false, 00:33:58.884 "nvme_iov_md": false 00:33:58.884 }, 00:33:58.884 "driver_specific": { 00:33:58.884 "lvol": { 00:33:58.884 "lvol_store_uuid": "1b509fa2-4820-4a4f-b40b-ce1a378f7393", 00:33:58.884 "base_bdev": "aio_bdev", 00:33:58.884 "thin_provision": false, 00:33:58.884 "num_allocated_clusters": 38, 00:33:58.884 "snapshot": false, 00:33:58.884 "clone": false, 00:33:58.884 "esnap_clone": false 00:33:58.884 } 00:33:58.884 } 00:33:58.884 } 00:33:58.884 ] 00:33:58.884 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:58.884 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:58.884 18:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:59.143 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:59.143 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:59.143 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:59.401 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:59.401 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:59.660 [2024-11-26 18:29:47.545072] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:59.660 18:29:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:59.660 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:33:59.918 request: 00:33:59.918 { 00:33:59.918 "uuid": "1b509fa2-4820-4a4f-b40b-ce1a378f7393", 00:33:59.918 "method": "bdev_lvol_get_lvstores", 00:33:59.918 "req_id": 1 00:33:59.918 } 00:33:59.918 Got JSON-RPC error response 00:33:59.918 response: 00:33:59.918 { 00:33:59.918 "code": -19, 00:33:59.918 "message": "No such device" 00:33:59.918 } 00:33:59.918 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:59.918 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:59.918 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:59.918 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:59.918 18:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:34:00.177 aio_bdev 00:34:00.177 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev aa6ba715-7a09-4dcf-97c0-b33f561015f3 00:34:00.177 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=aa6ba715-7a09-4dcf-97c0-b33f561015f3 00:34:00.177 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:00.177 18:29:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:34:00.177 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:00.177 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:00.177 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:00.745 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b aa6ba715-7a09-4dcf-97c0-b33f561015f3 -t 2000 00:34:00.745 [ 00:34:00.745 { 00:34:00.745 "name": "aa6ba715-7a09-4dcf-97c0-b33f561015f3", 00:34:00.745 "aliases": [ 00:34:00.745 "lvs/lvol" 00:34:00.745 ], 00:34:00.745 "product_name": "Logical Volume", 00:34:00.745 "block_size": 4096, 00:34:00.745 "num_blocks": 38912, 00:34:00.745 "uuid": "aa6ba715-7a09-4dcf-97c0-b33f561015f3", 00:34:00.745 "assigned_rate_limits": { 00:34:00.745 "rw_ios_per_sec": 0, 00:34:00.745 "rw_mbytes_per_sec": 0, 00:34:00.745 "r_mbytes_per_sec": 0, 00:34:00.746 "w_mbytes_per_sec": 0 00:34:00.746 }, 00:34:00.746 "claimed": false, 00:34:00.746 "zoned": false, 00:34:00.746 "supported_io_types": { 00:34:00.746 "read": true, 00:34:00.746 "write": true, 00:34:00.746 "unmap": true, 00:34:00.746 "flush": false, 00:34:00.746 "reset": true, 00:34:00.746 "nvme_admin": false, 00:34:00.746 "nvme_io": false, 00:34:00.746 "nvme_io_md": false, 00:34:00.746 "write_zeroes": true, 00:34:00.746 "zcopy": false, 00:34:00.746 "get_zone_info": false, 00:34:00.746 "zone_management": false, 00:34:00.746 "zone_append": false, 00:34:00.746 "compare": false, 00:34:00.746 "compare_and_write": false, 00:34:00.746 "abort": false, 00:34:00.746 "seek_hole": true, 00:34:00.746 "seek_data": true, 00:34:00.746 "copy": false, 00:34:00.746 "nvme_iov_md": false 00:34:00.746 }, 00:34:00.746 "driver_specific": { 00:34:00.746 "lvol": { 00:34:00.746 "lvol_store_uuid": "1b509fa2-4820-4a4f-b40b-ce1a378f7393", 00:34:00.746 "base_bdev": "aio_bdev", 00:34:00.746 "thin_provision": false, 00:34:00.746 "num_allocated_clusters": 38, 00:34:00.746 "snapshot": false, 00:34:00.746 "clone": false, 00:34:00.746 "esnap_clone": false 00:34:00.746 } 00:34:00.746 } 00:34:00.746 } 00:34:00.746 ] 00:34:01.005 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:34:01.005 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:34:01.005 18:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:34:01.264 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:34:01.264 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:34:01.264 18:29:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:34:01.522 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:34:01.522 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete aa6ba715-7a09-4dcf-97c0-b33f561015f3 00:34:01.780 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 00:34:02.039 18:29:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:34:02.297 00:34:02.297 real 0m19.769s 00:34:02.297 user 0m36.886s 00:34:02.297 sys 0m4.596s 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:34:02.297 ************************************ 00:34:02.297 END TEST lvs_grow_dirty 00:34:02.297 ************************************ 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:34:02.297 nvmf_trace.0 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
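Two quick consistency checks on the recovery pass above: the 60.71 MiB/s in the run summary is simply 15542.43 IOPS x 4096 B per I/O divided by 1 MiB, and the recovered lvstore's 61 free clusters match the 99 total data clusters (after bdev_lvol_grow_lvstore) minus the 38 clusters the lvol reports as num_allocated_clusters. The assertion flow itself, condensed from the trace with paths shortened; the real script goes through its NOT helper rather than a bare if:

    # Re-attach the AIO file; blobstore recovery replays the dirty lvstore (blobs 0x0 and 0x1 above).
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    free=$(scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 | jq -r '.[0].free_clusters')
    total=$(scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393 | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))

    # Deleting aio_bdev hot-removes the lvstore, so the same lookup is expected to fail (-19, "No such device").
    scripts/rpc.py bdev_aio_delete aio_bdev
    if scripts/rpc.py bdev_lvol_get_lvstores -u 1b509fa2-4820-4a4f-b40b-ce1a378f7393; then
        echo "lvstore unexpectedly still present" >&2
        exit 1
    fi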
00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:02.297 rmmod nvme_tcp 00:34:02.297 rmmod nvme_fabrics 00:34:02.297 rmmod nvme_keyring 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 750274 ']' 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 750274 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 750274 ']' 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 750274 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:02.297 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 750274 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 750274' 00:34:02.556 killing process with pid 750274 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 750274 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 750274 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:02.556 18:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:05.093 00:34:05.093 real 0m43.509s 00:34:05.093 user 0m56.454s 00:34:05.093 sys 0m8.505s 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:34:05.093 ************************************ 00:34:05.093 END TEST nvmf_lvs_grow 00:34:05.093 ************************************ 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:05.093 ************************************ 00:34:05.093 START TEST nvmf_bdev_io_wait 00:34:05.093 ************************************ 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:34:05.093 * Looking for test storage... 
00:34:05.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.093 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.094 --rc genhtml_branch_coverage=1 00:34:05.094 --rc genhtml_function_coverage=1 00:34:05.094 --rc genhtml_legend=1 00:34:05.094 --rc geninfo_all_blocks=1 00:34:05.094 --rc geninfo_unexecuted_blocks=1 00:34:05.094 00:34:05.094 ' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.094 --rc genhtml_branch_coverage=1 00:34:05.094 --rc genhtml_function_coverage=1 00:34:05.094 --rc genhtml_legend=1 00:34:05.094 --rc geninfo_all_blocks=1 00:34:05.094 --rc geninfo_unexecuted_blocks=1 00:34:05.094 00:34:05.094 ' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.094 --rc genhtml_branch_coverage=1 00:34:05.094 --rc genhtml_function_coverage=1 00:34:05.094 --rc genhtml_legend=1 00:34:05.094 --rc geninfo_all_blocks=1 00:34:05.094 --rc geninfo_unexecuted_blocks=1 00:34:05.094 00:34:05.094 ' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:05.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.094 --rc genhtml_branch_coverage=1 00:34:05.094 --rc genhtml_function_coverage=1 00:34:05.094 --rc genhtml_legend=1 00:34:05.094 --rc geninfo_all_blocks=1 00:34:05.094 --rc 
geninfo_unexecuted_blocks=1 00:34:05.094 00:34:05.094 ' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.094 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:34:05.095 18:29:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:06.997 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:06.998 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:06.998 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:06.998 Found net devices under 0000:09:00.0: cvl_0_0 00:34:06.998 
18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:06.998 Found net devices under 0000:09:00.1: cvl_0_1 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:06.998 18:29:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:07.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:34:07.257 00:34:07.257 --- 10.0.0.2 ping statistics --- 00:34:07.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.257 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:07.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:34:07.257 00:34:07.257 --- 10.0.0.1 ping statistics --- 00:34:07.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.257 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=752795 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 752795 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 752795 ']' 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
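
Up to this point the trace has built the TCP test bed: the first e810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the second (cvl_0_1) stays in the default namespace as the initiator, a tagged iptables rule opens the NVMe/TCP port, and connectivity is verified with ping in both directions. Condensed into a standalone sketch (interface names and the 10.0.0.0/24 addresses are the values discovered in this particular run, not constants):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                         # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP (default namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP (inside namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The SPDK_NVMF comment on the iptables rule is what lets the teardown later strip only the rule the test added (see the iptables-save | grep -v SPDK_NVMF | iptables-restore step near the end of this test).
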
00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:07.257 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.257 [2024-11-26 18:29:55.197799] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:07.257 [2024-11-26 18:29:55.198860] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:34:07.257 [2024-11-26 18:29:55.198937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.516 [2024-11-26 18:29:55.272151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:07.516 [2024-11-26 18:29:55.328046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.516 [2024-11-26 18:29:55.328099] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.516 [2024-11-26 18:29:55.328122] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.516 [2024-11-26 18:29:55.328133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.516 [2024-11-26 18:29:55.328142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.516 [2024-11-26 18:29:55.329727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:07.516 [2024-11-26 18:29:55.329834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:07.516 [2024-11-26 18:29:55.329926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:07.516 [2024-11-26 18:29:55.329935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.516 [2024-11-26 18:29:55.330524] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
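
The target itself is then launched inside that namespace with --wait-for-rpc, so at this point only the app framework and the RPC server are up: reactors are already running in interrupt mode on cores 0-3, but no subsystems are initialized yet. A minimal equivalent launch, with a crude stand-in for the waitforlisten helper (polling the RPC socket this way is an assumption, not a copy of the helper's logic):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
  -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
nvmfpid=$!
# wait until the RPC server answers on the default UNIX socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2
done
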
00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.516 [2024-11-26 18:29:55.518391] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:07.516 [2024-11-26 18:29:55.518623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:07.516 [2024-11-26 18:29:55.519417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:07.516 [2024-11-26 18:29:55.520150] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
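
With the RPC server reachable, the test shrinks the bdev_io pool before letting the framework finish initialization; a deliberately tiny pool is presumably what forces submissions to hit -ENOMEM and exercise the bdev_io_wait path this test is named after. rpc_cmd is a thin wrapper over scripts/rpc.py, so the two calls traced above are roughly (reading -p/-c as pool size and per-thread cache size is inferred from rpc.py's option names, not stated in the trace):

./scripts/rpc.py bdev_set_options -p 5 -c 1     # bdev_io pool size 5, cache size 1 (inferred meaning)
./scripts/rpc.py framework_start_init           # run subsystem init; poll-group threads switch to interrupt mode
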
00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.516 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.774 [2024-11-26 18:29:55.526750] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.774 Malloc0 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:07.774 [2024-11-26 18:29:55.582893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=752828 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=752830 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:07.774 { 00:34:07.774 "params": { 00:34:07.774 "name": "Nvme$subsystem", 00:34:07.774 "trtype": "$TEST_TRANSPORT", 00:34:07.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:07.774 "adrfam": "ipv4", 00:34:07.774 "trsvcid": "$NVMF_PORT", 00:34:07.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:07.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:07.774 "hdgst": ${hdgst:-false}, 00:34:07.774 "ddgst": ${ddgst:-false} 00:34:07.774 }, 00:34:07.774 "method": "bdev_nvme_attach_controller" 00:34:07.774 } 00:34:07.774 EOF 00:34:07.774 )") 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=752832 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:07.774 { 00:34:07.774 "params": { 00:34:07.774 "name": "Nvme$subsystem", 00:34:07.774 "trtype": "$TEST_TRANSPORT", 00:34:07.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:07.774 "adrfam": "ipv4", 00:34:07.774 "trsvcid": "$NVMF_PORT", 00:34:07.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:07.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:07.774 "hdgst": ${hdgst:-false}, 00:34:07.774 "ddgst": ${ddgst:-false} 00:34:07.774 }, 00:34:07.774 "method": "bdev_nvme_attach_controller" 00:34:07.774 } 00:34:07.774 EOF 00:34:07.774 )") 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=752835 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:07.774 { 00:34:07.774 "params": { 00:34:07.774 "name": "Nvme$subsystem", 00:34:07.774 "trtype": "$TEST_TRANSPORT", 00:34:07.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:07.774 "adrfam": "ipv4", 00:34:07.774 "trsvcid": "$NVMF_PORT", 00:34:07.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:07.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:07.774 "hdgst": ${hdgst:-false}, 00:34:07.774 "ddgst": ${ddgst:-false} 00:34:07.774 }, 00:34:07.774 "method": "bdev_nvme_attach_controller" 00:34:07.774 } 00:34:07.774 EOF 00:34:07.774 )") 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:07.774 { 00:34:07.774 "params": { 00:34:07.774 "name": "Nvme$subsystem", 00:34:07.774 "trtype": "$TEST_TRANSPORT", 00:34:07.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:07.774 "adrfam": "ipv4", 00:34:07.774 "trsvcid": "$NVMF_PORT", 00:34:07.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:07.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:07.774 "hdgst": ${hdgst:-false}, 00:34:07.774 "ddgst": ${ddgst:-false} 00:34:07.774 }, 00:34:07.774 "method": "bdev_nvme_attach_controller" 00:34:07.774 } 00:34:07.774 EOF 00:34:07.774 )") 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 752828 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
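
Before the I/O generators start, the target side has been populated over RPC (steps @20-@25 of bdev_io_wait.sh in the trace above): a TCP transport, a RAM-backed Malloc bdev, one subsystem with that bdev as a namespace, and a listener on the namespace-side address. The same sequence as plain rpc.py calls, a sketch assuming the default socket path (flag meanings beyond what the trace shows are left unannotated):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
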
00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:07.774 "params": { 00:34:07.774 "name": "Nvme1", 00:34:07.774 "trtype": "tcp", 00:34:07.774 "traddr": "10.0.0.2", 00:34:07.774 "adrfam": "ipv4", 00:34:07.774 "trsvcid": "4420", 00:34:07.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:07.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:07.774 "hdgst": false, 00:34:07.774 "ddgst": false 00:34:07.774 }, 00:34:07.774 "method": "bdev_nvme_attach_controller" 00:34:07.774 }' 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:07.774 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:07.774 "params": { 00:34:07.774 "name": "Nvme1", 00:34:07.774 "trtype": "tcp", 00:34:07.774 "traddr": "10.0.0.2", 00:34:07.774 "adrfam": "ipv4", 00:34:07.775 "trsvcid": "4420", 00:34:07.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:07.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:07.775 "hdgst": false, 00:34:07.775 "ddgst": false 00:34:07.775 }, 00:34:07.775 "method": "bdev_nvme_attach_controller" 00:34:07.775 }' 00:34:07.775 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:07.775 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:07.775 "params": { 00:34:07.775 "name": "Nvme1", 00:34:07.775 "trtype": "tcp", 00:34:07.775 "traddr": "10.0.0.2", 00:34:07.775 "adrfam": "ipv4", 00:34:07.775 "trsvcid": "4420", 00:34:07.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:07.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:07.775 "hdgst": false, 00:34:07.775 "ddgst": false 00:34:07.775 }, 00:34:07.775 "method": "bdev_nvme_attach_controller" 00:34:07.775 }' 00:34:07.775 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:07.775 18:29:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:07.775 "params": { 00:34:07.775 "name": "Nvme1", 00:34:07.775 "trtype": "tcp", 00:34:07.775 "traddr": "10.0.0.2", 00:34:07.775 "adrfam": "ipv4", 00:34:07.775 "trsvcid": "4420", 00:34:07.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:07.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:07.775 "hdgst": false, 00:34:07.775 "ddgst": false 00:34:07.775 }, 00:34:07.775 "method": "bdev_nvme_attach_controller" 00:34:07.775 }' 00:34:07.775 [2024-11-26 18:29:55.632822] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:34:07.775 [2024-11-26 18:29:55.632822] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
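
Each bdevperf instance is handed its bdev configuration as JSON over an anonymous file descriptor (--json /dev/fd/63, i.e. bash process substitution of gen_nvmf_target_json). The fragment printed above is the bdev_nvme_attach_controller entry; the surrounding "subsystems"/"bdev" wrapper shown below is inferred from how SPDK JSON config files are laid out and is not printed verbatim in this trace:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}

Each bdevperf process therefore connects to the subsystem created above as host nqn.2016-06.io.spdk:host1 and runs its workload against the resulting Nvme1n1 bdev, which is the job name that appears in the latency tables below.
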
00:34:07.775 [2024-11-26 18:29:55.632927] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-26 18:29:55.632927] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:07.775 --proc-type=auto ] 00:34:07.775 [2024-11-26 18:29:55.633054] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:34:07.775 [2024-11-26 18:29:55.633053] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:34:07.775 [2024-11-26 18:29:55.633128] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-26 18:29:55.633128] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:07.775 --proc-type=auto ] 00:34:08.032 [2024-11-26 18:29:55.833250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.032 [2024-11-26 18:29:55.888699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:08.032 [2024-11-26 18:29:55.959595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.032 [2024-11-26 18:29:56.013727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.032 [2024-11-26 18:29:56.016907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:08.289 [2024-11-26 18:29:56.065679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:08.289 [2024-11-26 18:29:56.081161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.289 [2024-11-26 18:29:56.133021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:08.289 Running I/O for 1 seconds... 00:34:08.289 Running I/O for 1 seconds... 00:34:08.546 Running I/O for 1 seconds... 00:34:08.546 Running I/O for 1 seconds... 
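
Four bdevperf processes now run concurrently against that single namespace, one per workload (write, read, flush, unmap), each pinned to its own core (0x10/0x20/0x40/0x80) and kept apart in DPDK shared memory by its instance id, which is where the spdk1..spdk4 file prefixes in the EAL parameter lines come from. A condensed sketch of the launch/wait pattern traced above (the script waits on each PID separately; a single wait on all four is an equivalent simplification):

./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
./build/examples/bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
./build/examples/bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
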
00:34:09.480 6681.00 IOPS, 26.10 MiB/s 00:34:09.481 Latency(us) 00:34:09.481 [2024-11-26T17:29:57.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.481 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:09.481 Nvme1n1 : 1.02 6711.32 26.22 0.00 0.00 18936.84 2123.85 30680.56 00:34:09.481 [2024-11-26T17:29:57.492Z] =================================================================================================================== 00:34:09.481 [2024-11-26T17:29:57.492Z] Total : 6711.32 26.22 0.00 0.00 18936.84 2123.85 30680.56 00:34:09.481 9749.00 IOPS, 38.08 MiB/s 00:34:09.481 Latency(us) 00:34:09.481 [2024-11-26T17:29:57.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.481 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:09.481 Nvme1n1 : 1.01 9792.76 38.25 0.00 0.00 13008.43 4660.34 18155.90 00:34:09.481 [2024-11-26T17:29:57.492Z] =================================================================================================================== 00:34:09.481 [2024-11-26T17:29:57.492Z] Total : 9792.76 38.25 0.00 0.00 13008.43 4660.34 18155.90 00:34:09.481 6595.00 IOPS, 25.76 MiB/s [2024-11-26T17:29:57.492Z] 162968.00 IOPS, 636.59 MiB/s 00:34:09.481 Latency(us) 00:34:09.481 [2024-11-26T17:29:57.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.481 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:09.481 Nvme1n1 : 1.01 6723.17 26.26 0.00 0.00 18985.93 4077.80 38253.61 00:34:09.481 [2024-11-26T17:29:57.492Z] =================================================================================================================== 00:34:09.481 [2024-11-26T17:29:57.492Z] Total : 6723.17 26.26 0.00 0.00 18985.93 4077.80 38253.61 00:34:09.481 00:34:09.481 Latency(us) 00:34:09.481 [2024-11-26T17:29:57.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.481 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:09.481 Nvme1n1 : 1.00 162663.20 635.40 0.00 0.00 782.57 304.92 1868.99 00:34:09.481 [2024-11-26T17:29:57.492Z] =================================================================================================================== 00:34:09.481 [2024-11-26T17:29:57.492Z] Total : 162663.20 635.40 0.00 0.00 782.57 304.92 1868.99 00:34:09.481 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 752830 00:34:09.481 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 752832 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 752835 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:09.739 18:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:09.739 rmmod nvme_tcp 00:34:09.739 rmmod nvme_fabrics 00:34:09.739 rmmod nvme_keyring 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 752795 ']' 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 752795 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 752795 ']' 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 752795 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 752795 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 752795' 00:34:09.739 killing process with pid 752795 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 752795 00:34:09.739 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 752795 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:09.998 18:29:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.998 18:29:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:11.898 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:11.898 00:34:11.898 real 0m7.231s 00:34:11.898 user 0m14.117s 00:34:11.898 sys 0m4.005s 00:34:11.898 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.898 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:11.898 ************************************ 00:34:11.898 END TEST nvmf_bdev_io_wait 00:34:11.898 ************************************ 00:34:12.156 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:12.156 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:12.156 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:12.156 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:12.156 ************************************ 00:34:12.156 START TEST nvmf_queue_depth 00:34:12.156 ************************************ 00:34:12.156 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:12.156 * Looking for test storage... 
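
Just before nvmf_queue_depth starts, the bdev_io_wait run tore its environment down: the kernel NVMe/TCP modules loaded for the run are removed, the interrupt-mode target is killed, the SPDK-tagged firewall rule is dropped, and the namespace-side addressing is cleaned up. Condensed sketch of that nvmftestfini path (the ip netns delete line is an assumed equivalent of the _remove_spdk_ns helper, whose body is not shown in this trace):

sync
modprobe -v -r nvme-tcp                                 # also unloads nvme_fabrics / nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                      # stop the nvmf_tgt started earlier
iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK-tagged ACCEPT rule
ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1
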
00:34:12.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:12.156 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:12.156 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:34:12.156 18:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:12.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.156 --rc genhtml_branch_coverage=1 00:34:12.156 --rc genhtml_function_coverage=1 00:34:12.156 --rc genhtml_legend=1 00:34:12.156 --rc geninfo_all_blocks=1 00:34:12.156 --rc geninfo_unexecuted_blocks=1 00:34:12.156 00:34:12.156 ' 00:34:12.156 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:12.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.156 --rc genhtml_branch_coverage=1 00:34:12.156 --rc genhtml_function_coverage=1 00:34:12.156 --rc genhtml_legend=1 00:34:12.156 --rc geninfo_all_blocks=1 00:34:12.156 --rc geninfo_unexecuted_blocks=1 00:34:12.156 00:34:12.157 ' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:12.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.157 --rc genhtml_branch_coverage=1 00:34:12.157 --rc genhtml_function_coverage=1 00:34:12.157 --rc genhtml_legend=1 00:34:12.157 --rc geninfo_all_blocks=1 00:34:12.157 --rc geninfo_unexecuted_blocks=1 00:34:12.157 00:34:12.157 ' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:12.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:12.157 --rc genhtml_branch_coverage=1 00:34:12.157 --rc genhtml_function_coverage=1 00:34:12.157 --rc genhtml_legend=1 00:34:12.157 --rc geninfo_all_blocks=1 00:34:12.157 --rc 
geninfo_unexecuted_blocks=1 00:34:12.157 00:34:12.157 ' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:12.157 18:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:14.083 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:14.084 18:30:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:14.084 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:14.084 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:34:14.084 Found net devices under 0000:09:00.0: cvl_0_0 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:14.084 Found net devices under 0000:09:00.1: cvl_0_1 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.084 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:14.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:14.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:34:14.343 00:34:14.343 --- 10.0.0.2 ping statistics --- 00:34:14.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.343 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:14.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:34:14.343 00:34:14.343 --- 10.0.0.1 ping statistics --- 00:34:14.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.343 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=755163 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 755163 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 755163 ']' 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
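[editor's note] The trace above shows nvmf_tcp_init wiring the two E810 ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP port 4420, and both directions are verified with ping. A minimal sketch of that sequence, assuming the interface and namespace names from this run:

    # Sketch only: interface/namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are taken from this run.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the initiator side
    ping -c 1 10.0.0.2                                    # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace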
00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.343 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.343 [2024-11-26 18:30:02.265439] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:14.343 [2024-11-26 18:30:02.266499] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:34:14.343 [2024-11-26 18:30:02.266572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.343 [2024-11-26 18:30:02.344415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.602 [2024-11-26 18:30:02.404527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.602 [2024-11-26 18:30:02.404600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:14.602 [2024-11-26 18:30:02.404615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.602 [2024-11-26 18:30:02.404626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:14.602 [2024-11-26 18:30:02.404636] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:14.602 [2024-11-26 18:30:02.405248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.602 [2024-11-26 18:30:02.501257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:14.602 [2024-11-26 18:30:02.501585] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
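[editor's note] As the preceding entries show, the target application is launched inside that namespace on a single core with interrupt mode enabled, and the harness waits for the RPC socket before configuring it. Roughly as below; the polling loop is only an illustration of the wait, not the harness's actual waitforlisten helper:

    # Sketch, using the binary path and arguments from this run.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # Illustrative wait: poll the RPC socket until the app answers.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done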
00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.602 [2024-11-26 18:30:02.549909] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.602 Malloc0 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.602 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
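[editor's note] queue_depth.sh then builds the target over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a listener on 10.0.0.2:4420; the listener confirmation and the bdevperf run against it with queue depth 1024 follow below. A condensed sketch of the same calls (error handling and the waits for each RPC socket are omitted):

    rpc=./scripts/rpc.py                       # target-side RPCs on the default /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: bdevperf issuing 4 KiB verify I/O at queue depth 1024 for 10 seconds.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests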
00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.861 [2024-11-26 18:30:02.617960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=755293 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 755293 /var/tmp/bdevperf.sock 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 755293 ']' 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:14.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:14.861 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:14.861 [2024-11-26 18:30:02.665889] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:34:14.861 [2024-11-26 18:30:02.665950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid755293 ] 00:34:14.861 [2024-11-26 18:30:02.733181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.861 [2024-11-26 18:30:02.791660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.119 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:15.119 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:15.119 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:15.119 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.119 18:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:15.119 NVMe0n1 00:34:15.119 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.119 18:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:15.119 Running I/O for 10 seconds... 00:34:17.419 8192.00 IOPS, 32.00 MiB/s [2024-11-26T17:30:06.362Z] 8425.00 IOPS, 32.91 MiB/s [2024-11-26T17:30:07.296Z] 8533.33 IOPS, 33.33 MiB/s [2024-11-26T17:30:08.231Z] 8490.00 IOPS, 33.16 MiB/s [2024-11-26T17:30:09.165Z] 8599.00 IOPS, 33.59 MiB/s [2024-11-26T17:30:10.536Z] 8571.33 IOPS, 33.48 MiB/s [2024-11-26T17:30:11.470Z] 8629.00 IOPS, 33.71 MiB/s [2024-11-26T17:30:12.403Z] 8628.00 IOPS, 33.70 MiB/s [2024-11-26T17:30:13.337Z] 8644.78 IOPS, 33.77 MiB/s [2024-11-26T17:30:13.337Z] 8656.30 IOPS, 33.81 MiB/s 00:34:25.326 Latency(us) 00:34:25.326 [2024-11-26T17:30:13.337Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.326 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:25.326 Verification LBA range: start 0x0 length 0x4000 00:34:25.326 NVMe0n1 : 10.08 8685.42 33.93 0.00 0.00 117327.54 22330.79 70293.43 00:34:25.326 [2024-11-26T17:30:13.337Z] =================================================================================================================== 00:34:25.326 [2024-11-26T17:30:13.337Z] Total : 8685.42 33.93 0.00 0.00 117327.54 22330.79 70293.43 00:34:25.326 { 00:34:25.326 "results": [ 00:34:25.326 { 00:34:25.326 "job": "NVMe0n1", 00:34:25.326 "core_mask": "0x1", 00:34:25.326 "workload": "verify", 00:34:25.326 "status": "finished", 00:34:25.326 "verify_range": { 00:34:25.326 "start": 0, 00:34:25.326 "length": 16384 00:34:25.326 }, 00:34:25.326 "queue_depth": 1024, 00:34:25.326 "io_size": 4096, 00:34:25.326 "runtime": 10.084368, 00:34:25.326 "iops": 8685.422824712467, 00:34:25.326 "mibps": 33.92743290903307, 00:34:25.326 "io_failed": 0, 00:34:25.326 "io_timeout": 0, 00:34:25.326 "avg_latency_us": 117327.53823609035, 00:34:25.326 "min_latency_us": 22330.785185185185, 00:34:25.326 "max_latency_us": 70293.42814814814 00:34:25.326 } 00:34:25.326 ], 
00:34:25.326 "core_count": 1 00:34:25.326 } 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 755293 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 755293 ']' 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 755293 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755293 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755293' 00:34:25.327 killing process with pid 755293 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 755293 00:34:25.327 Received shutdown signal, test time was about 10.000000 seconds 00:34:25.327 00:34:25.327 Latency(us) 00:34:25.327 [2024-11-26T17:30:13.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.327 [2024-11-26T17:30:13.338Z] =================================================================================================================== 00:34:25.327 [2024-11-26T17:30:13.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:25.327 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 755293 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:25.585 rmmod nvme_tcp 00:34:25.585 rmmod nvme_fabrics 00:34:25.585 rmmod nvme_keyring 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:25.585 18:30:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 755163 ']' 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 755163 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 755163 ']' 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 755163 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:25.585 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 755163 00:34:25.844 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:25.844 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:25.844 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 755163' 00:34:25.844 killing process with pid 755163 00:34:25.844 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 755163 00:34:25.844 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 755163 00:34:25.844 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:25.844 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:25.844 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:25.844 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:26.105 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:26.105 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:26.105 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:26.105 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:26.105 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:26.105 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.105 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:26.105 18:30:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:28.014 00:34:28.014 real 0m15.962s 00:34:28.014 user 0m22.125s 00:34:28.014 sys 0m3.308s 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:28.014 ************************************ 00:34:28.014 END TEST nvmf_queue_depth 00:34:28.014 ************************************ 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:28.014 ************************************ 00:34:28.014 START TEST nvmf_target_multipath 00:34:28.014 ************************************ 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:28.014 * Looking for test storage... 00:34:28.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:34:28.014 18:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:28.273 18:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:28.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.273 --rc genhtml_branch_coverage=1 00:34:28.273 --rc genhtml_function_coverage=1 00:34:28.273 --rc genhtml_legend=1 00:34:28.273 --rc geninfo_all_blocks=1 00:34:28.273 --rc geninfo_unexecuted_blocks=1 00:34:28.273 00:34:28.273 ' 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:28.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.273 --rc genhtml_branch_coverage=1 00:34:28.273 --rc genhtml_function_coverage=1 00:34:28.273 --rc genhtml_legend=1 00:34:28.273 --rc geninfo_all_blocks=1 00:34:28.273 --rc geninfo_unexecuted_blocks=1 00:34:28.273 00:34:28.273 ' 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:28.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.273 --rc genhtml_branch_coverage=1 00:34:28.273 --rc genhtml_function_coverage=1 00:34:28.273 --rc genhtml_legend=1 00:34:28.273 --rc geninfo_all_blocks=1 00:34:28.273 --rc 
geninfo_unexecuted_blocks=1 00:34:28.273 00:34:28.273 ' 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:28.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:28.273 --rc genhtml_branch_coverage=1 00:34:28.273 --rc genhtml_function_coverage=1 00:34:28.273 --rc genhtml_legend=1 00:34:28.273 --rc geninfo_all_blocks=1 00:34:28.273 --rc geninfo_unexecuted_blocks=1 00:34:28.273 00:34:28.273 ' 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:28.273 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:28.273 18:30:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:28.274 18:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
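[editor's note] The multipath test now re-runs nvmftestinit, and the entries that follow show gather_supported_nvmf_pci_devs matching the two E810 ports by PCI ID (0x8086:0x159b) and resolving their net devices through sysfs. The lookup amounts to something like the sketch below, with the PCI address and device names taken from this run:

    # Sketch of the sysfs lookup performed for each matched PCI address.
    pci=0000:09:00.0                                    # one of the E810 (0x8086:0x159b) ports found below
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"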
00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:30.175 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.434 18:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:30.434 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:30.434 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:30.434 18:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:30.434 Found net devices under 0000:09:00.0: cvl_0_0 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:30.434 Found net devices under 0000:09:00.1: cvl_0_1 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:30.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:30.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:34:30.434 00:34:30.434 --- 10.0.0.2 ping statistics --- 00:34:30.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.434 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:34:30.434 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:30.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:34:30.435 00:34:30.435 --- 10.0.0.1 ping statistics --- 00:34:30.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.435 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:30.435 only one NIC for nvmf test 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:30.435 rmmod nvme_tcp 00:34:30.435 rmmod nvme_fabrics 00:34:30.435 rmmod nvme_keyring 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:30.435 18:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.435 18:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:32.967 18:30:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:32.967 00:34:32.967 real 0m4.554s 00:34:32.967 user 0m0.925s 00:34:32.967 sys 0m1.627s 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:32.967 ************************************ 00:34:32.967 END TEST nvmf_target_multipath 00:34:32.967 ************************************ 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:32.967 ************************************ 00:34:32.967 START TEST nvmf_zcopy 00:34:32.967 ************************************ 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:32.967 * Looking for test storage... 
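For readers following the trace, the nvmftestfini/nvmfcleanup teardown logged above reduces to roughly the following shell steps. This is a condensed sketch using the interface and namespace names from this run (cvl_0_1, cvl_0_0_ns_spdk), not the literal helpers from nvmf/common.sh; in particular, _remove_spdk_ns runs with xtrace disabled above and is assumed here to amount to deleting the test namespace.

    # unload the kernel NVMe/TCP initiator stack pulled in by the test
    sync
    modprobe -r nvme-tcp
    modprobe -r nvme-fabrics
    # drop only the firewall rules the test added (tagged with an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # remove the target-side network namespace and flush the initiator interface
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1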
00:34:32.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:32.967 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:32.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.968 --rc genhtml_branch_coverage=1 00:34:32.968 --rc genhtml_function_coverage=1 00:34:32.968 --rc genhtml_legend=1 00:34:32.968 --rc geninfo_all_blocks=1 00:34:32.968 --rc geninfo_unexecuted_blocks=1 00:34:32.968 00:34:32.968 ' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:32.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.968 --rc genhtml_branch_coverage=1 00:34:32.968 --rc genhtml_function_coverage=1 00:34:32.968 --rc genhtml_legend=1 00:34:32.968 --rc geninfo_all_blocks=1 00:34:32.968 --rc geninfo_unexecuted_blocks=1 00:34:32.968 00:34:32.968 ' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:32.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.968 --rc genhtml_branch_coverage=1 00:34:32.968 --rc genhtml_function_coverage=1 00:34:32.968 --rc genhtml_legend=1 00:34:32.968 --rc geninfo_all_blocks=1 00:34:32.968 --rc geninfo_unexecuted_blocks=1 00:34:32.968 00:34:32.968 ' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:32.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.968 --rc genhtml_branch_coverage=1 00:34:32.968 --rc genhtml_function_coverage=1 00:34:32.968 --rc genhtml_legend=1 00:34:32.968 --rc geninfo_all_blocks=1 00:34:32.968 --rc geninfo_unexecuted_blocks=1 00:34:32.968 00:34:32.968 ' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
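The lcov version check traced above (lt 1.15 2 via cmp_versions in scripts/common.sh) compares dotted version strings field by field. A condensed bash sketch of that comparison, not the exact function from the script:

    lt() {
        # return 0 (true) if version $1 is strictly lower than version $2
        local IFS=.- i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            if ((${v1[i]:-0} < ${v2[i]:-0})); then return 0; fi
            if ((${v1[i]:-0} > ${v2[i]:-0})); then return 1; fi
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2.x"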
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.968 18:30:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.968 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:32.969 18:30:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:34.872 18:30:22 
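The NIC discovery traced over the next entries (gather_supported_nvmf_pci_devs) builds lists of supported Intel/Mellanox PCI device IDs, keeps the e810 functions present on this host, and then resolves each PCI function to its kernel netdev through sysfs. A simplified sketch of that sysfs walk, using the 0000:09:00.x addresses seen in this run:

    # map each NVMe-oF-capable PCI function to the netdev the kernel created for it
    for pci in 0000:09:00.0 0000:09:00.1; do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net ]] || continue                         # skip functions with no bound netdev
            echo "Found net devices under $pci: ${net##*/}"   # e.g. cvl_0_0, cvl_0_1
        done
    done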
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:34.872 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:34.872 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:34.872 Found net devices under 0000:09:00.0: cvl_0_0 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:34.872 Found net devices under 0000:09:00.1: cvl_0_1 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:34.872 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:34.873 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:34.873 18:30:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:34:35.131 00:34:35.131 --- 10.0.0.2 ping statistics --- 00:34:35.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.131 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:35.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:34:35.131 00:34:35.131 --- 10.0.0.1 ping statistics --- 00:34:35.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.131 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=760866 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 760866 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 760866 ']' 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:35.131 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:35.132 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:35.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:35.132 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:35.132 18:30:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.132 [2024-11-26 18:30:23.011598] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:35.132 [2024-11-26 18:30:23.012718] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:34:35.132 [2024-11-26 18:30:23.012771] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:35.132 [2024-11-26 18:30:23.085803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.391 [2024-11-26 18:30:23.146570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:35.391 [2024-11-26 18:30:23.146625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:35.391 [2024-11-26 18:30:23.146638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:35.391 [2024-11-26 18:30:23.146648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:35.391 [2024-11-26 18:30:23.146657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:35.391 [2024-11-26 18:30:23.147207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:35.391 [2024-11-26 18:30:23.240871] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:35.391 [2024-11-26 18:30:23.241137] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
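With nvmf_tgt up in interrupt mode inside the cvl_0_0_ns_spdk namespace, the rpc_cmd calls traced below assemble the NVMe-oF target for the zcopy test. Expressed as direct scripts/rpc.py invocations (rpc_cmd is a thin wrapper; flags are copied from the trace and the default /var/tmp/spdk.sock RPC socket is assumed), the sequence is roughly:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                          # TCP transport, zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0                                 # 32 MiB ramdisk, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1         # expose it as namespace 1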
00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.391 [2024-11-26 18:30:23.291813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.391 [2024-11-26 18:30:23.307947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:35.391 18:30:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.391 malloc0 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:35.391 { 00:34:35.391 "params": { 00:34:35.391 "name": "Nvme$subsystem", 00:34:35.391 "trtype": "$TEST_TRANSPORT", 00:34:35.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:35.391 "adrfam": "ipv4", 00:34:35.391 "trsvcid": "$NVMF_PORT", 00:34:35.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:35.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:35.391 "hdgst": ${hdgst:-false}, 00:34:35.391 "ddgst": ${ddgst:-false} 00:34:35.391 }, 00:34:35.391 "method": "bdev_nvme_attach_controller" 00:34:35.391 } 00:34:35.391 EOF 00:34:35.391 )") 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:35.391 18:30:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:35.391 "params": { 00:34:35.391 "name": "Nvme1", 00:34:35.391 "trtype": "tcp", 00:34:35.391 "traddr": "10.0.0.2", 00:34:35.391 "adrfam": "ipv4", 00:34:35.391 "trsvcid": "4420", 00:34:35.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:35.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:35.391 "hdgst": false, 00:34:35.391 "ddgst": false 00:34:35.391 }, 00:34:35.391 "method": "bdev_nvme_attach_controller" 00:34:35.391 }' 00:34:35.391 [2024-11-26 18:30:23.395954] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
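The bdevperf job starting here is the one launched at target/zcopy.sh@33 above: its bdev configuration (the bdev_nvme_attach_controller fragment printed in the trace) is generated by gen_nvmf_target_json and handed over on a file descriptor. A sketch of that invocation pattern, using process substitution in place of the /dev/fd/62 and /dev/fd/63 redirections the script performs, and assuming the spdk repo root as the working directory:

    # first run: 10 s verify workload, queue depth 128, 8 KiB I/O
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
    # second run (traced further below): 5 s 50/50 random read/write against the same target config
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192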
00:34:35.391 [2024-11-26 18:30:23.396044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid761011 ] 00:34:35.649 [2024-11-26 18:30:23.468508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.649 [2024-11-26 18:30:23.528411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.907 Running I/O for 10 seconds... 00:34:38.272 5651.00 IOPS, 44.15 MiB/s [2024-11-26T17:30:27.215Z] 5706.50 IOPS, 44.58 MiB/s [2024-11-26T17:30:28.147Z] 5703.67 IOPS, 44.56 MiB/s [2024-11-26T17:30:29.080Z] 5706.75 IOPS, 44.58 MiB/s [2024-11-26T17:30:30.012Z] 5482.80 IOPS, 42.83 MiB/s [2024-11-26T17:30:30.946Z] 5390.50 IOPS, 42.11 MiB/s [2024-11-26T17:30:31.879Z] 5440.57 IOPS, 42.50 MiB/s [2024-11-26T17:30:33.252Z] 5474.75 IOPS, 42.77 MiB/s [2024-11-26T17:30:34.186Z] 5511.11 IOPS, 43.06 MiB/s [2024-11-26T17:30:34.186Z] 5540.00 IOPS, 43.28 MiB/s 00:34:46.175 Latency(us) 00:34:46.175 [2024-11-26T17:30:34.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:46.175 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:46.175 Verification LBA range: start 0x0 length 0x1000 00:34:46.175 Nvme1n1 : 10.01 5541.45 43.29 0.00 0.00 23036.37 694.80 29515.47 00:34:46.175 [2024-11-26T17:30:34.186Z] =================================================================================================================== 00:34:46.175 [2024-11-26T17:30:34.186Z] Total : 5541.45 43.29 0.00 0.00 23036.37 694.80 29515.47 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=762196 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:46.175 { 00:34:46.175 "params": { 00:34:46.175 "name": "Nvme$subsystem", 00:34:46.175 "trtype": "$TEST_TRANSPORT", 00:34:46.175 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:46.175 "adrfam": "ipv4", 00:34:46.175 "trsvcid": "$NVMF_PORT", 00:34:46.175 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:46.175 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:46.175 "hdgst": ${hdgst:-false}, 00:34:46.175 "ddgst": ${ddgst:-false} 00:34:46.175 }, 00:34:46.175 "method": "bdev_nvme_attach_controller" 00:34:46.175 } 00:34:46.175 EOF 00:34:46.175 )") 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:46.175 
[2024-11-26 18:30:34.111758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.175 [2024-11-26 18:30:34.111794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:46.175 18:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:46.175 "params": { 00:34:46.175 "name": "Nvme1", 00:34:46.175 "trtype": "tcp", 00:34:46.175 "traddr": "10.0.0.2", 00:34:46.175 "adrfam": "ipv4", 00:34:46.175 "trsvcid": "4420", 00:34:46.175 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:46.175 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:46.175 "hdgst": false, 00:34:46.175 "ddgst": false 00:34:46.175 }, 00:34:46.175 "method": "bdev_nvme_attach_controller" 00:34:46.175 }' 00:34:46.175 [2024-11-26 18:30:34.119694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.175 [2024-11-26 18:30:34.119716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.175 [2024-11-26 18:30:34.127691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.175 [2024-11-26 18:30:34.127711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.175 [2024-11-26 18:30:34.135683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.175 [2024-11-26 18:30:34.135702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.175 [2024-11-26 18:30:34.143677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.175 [2024-11-26 18:30:34.143697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.175 [2024-11-26 18:30:34.150173] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:34:46.175 [2024-11-26 18:30:34.150232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid762196 ] 00:34:46.175 [2024-11-26 18:30:34.151696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.175 [2024-11-26 18:30:34.151724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.175 [2024-11-26 18:30:34.159694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.176 [2024-11-26 18:30:34.159714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.176 [2024-11-26 18:30:34.167716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.176 [2024-11-26 18:30:34.167736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.176 [2024-11-26 18:30:34.175696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.176 [2024-11-26 18:30:34.175715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.176 [2024-11-26 18:30:34.183708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.176 [2024-11-26 18:30:34.183727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.191696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.191715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.199691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.199710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.207691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.207711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.215691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.215710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.219095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:46.435 [2024-11-26 18:30:34.223693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.223713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.231725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.231756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.239701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.239724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.247691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.247710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:34:46.435 [2024-11-26 18:30:34.255692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.255711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.263690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.263710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.271692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.271711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.279681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.279702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.280751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:46.435 [2024-11-26 18:30:34.287692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.287711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.295705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.295735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.303717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.303749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.311724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.311757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.319720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.319750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.327704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.327735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.335701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.335731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.343717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.343747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.351690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.351709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.359700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.359729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 
18:30:34.367703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.367732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.375721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.375752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.383691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.383711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.391691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.391710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.399697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.399720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.407695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.407719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.415679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.415701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.423694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.423717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.431693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.431713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.435 [2024-11-26 18:30:34.439684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.435 [2024-11-26 18:30:34.439706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.447696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.447730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.455690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.455710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.463688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.463712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.471695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.471717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.479695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.479717] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.487675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.487696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.495690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.495710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.503691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.503711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.511690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.511710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.519689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.519715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.527693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.527715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.535693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.535727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.543691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.543711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.551690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.551715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.559692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.559712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.567698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.567735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.575703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.575737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.583676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.583696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.591693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.591714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.599692] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.599717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.607692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.607713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.615757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.615783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.623703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.623726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 Running I/O for 5 seconds... 00:34:46.694 [2024-11-26 18:30:34.631763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.631791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.646733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.646760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.657518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.657545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.671176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.671202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.681084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.681108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.694 [2024-11-26 18:30:34.693068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.694 [2024-11-26 18:30:34.693093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.952 [2024-11-26 18:30:34.710062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.952 [2024-11-26 18:30:34.710088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.952 [2024-11-26 18:30:34.728356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.952 [2024-11-26 18:30:34.728382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.952 [2024-11-26 18:30:34.738579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.952 [2024-11-26 18:30:34.738605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.952 [2024-11-26 18:30:34.754788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.952 [2024-11-26 18:30:34.754813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.952 [2024-11-26 18:30:34.764708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.952 
[2024-11-26 18:30:34.764733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.952 [2024-11-26 18:30:34.776729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.952 [2024-11-26 18:30:34.776753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.952 [2024-11-26 18:30:34.787447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.952 [2024-11-26 18:30:34.787473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.952 [2024-11-26 18:30:34.798342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.952 [2024-11-26 18:30:34.798368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.952 [2024-11-26 18:30:34.811291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.811340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.820598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.820639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.832183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.832206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.842918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.842941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.855533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.855560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.865178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.865204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.876863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.876887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.887522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.887548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.898460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.898486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.913329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.913354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.922520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.922548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.936809] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.936835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.946114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.946139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:46.953 [2024-11-26 18:30:34.958082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:46.953 [2024-11-26 18:30:34.958108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:34.973970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:34.973995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:34.983785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:34.983809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:34.995479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:34.995504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:35.006233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:35.006257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:35.019984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:35.020024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:35.029703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:35.029728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:35.041867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:35.041892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:35.057251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:35.057291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:35.066544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:35.066570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:35.080526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:35.080553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:35.090386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.211 [2024-11-26 18:30:35.090412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.211 [2024-11-26 18:30:35.104330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.104371] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.114268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.114316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.127946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.127972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.137146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.137171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.148702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.148728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.158860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.158884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.172472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.172498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.182267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.182314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.196348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.196374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.205891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.205916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.212 [2024-11-26 18:30:35.220887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.212 [2024-11-26 18:30:35.220913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.469 [2024-11-26 18:30:35.230291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.469 [2024-11-26 18:30:35.230338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.469 [2024-11-26 18:30:35.246056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.246081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.261230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.261265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.270674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.270699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.284484] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.284511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.293897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.293922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.305386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.305412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.320343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.320371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.329209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.329234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.345108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.345133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.354519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.354545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.367679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.367720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.376707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.376732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.388638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.388663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.398878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.398904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.411921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.411961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.421638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.421662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.433457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.433482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.448102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.448128] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.457293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.457340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.470 [2024-11-26 18:30:35.472833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.470 [2024-11-26 18:30:35.472857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.481909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.481945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.493538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.493565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.509550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.509576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.519208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.519232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.531185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.531210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.541876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.541901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.556191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.556216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.565641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.565667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.577240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.577265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.594931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.594956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.609509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.609536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.618727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.618752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.633492] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.633518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 11730.00 IOPS, 91.64 MiB/s [2024-11-26T17:30:35.739Z] [2024-11-26 18:30:35.643167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.643191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.655077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.655102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.667556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.667584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.677182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.677208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.692940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.692967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.702350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.702377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.717664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.717698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.728 [2024-11-26 18:30:35.727651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.728 [2024-11-26 18:30:35.727677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.739813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.986 [2024-11-26 18:30:35.739840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.750797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.986 [2024-11-26 18:30:35.750822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.763619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.986 [2024-11-26 18:30:35.763645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.773077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.986 [2024-11-26 18:30:35.773102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.785002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.986 [2024-11-26 18:30:35.785027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.799642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:47.986 [2024-11-26 18:30:35.799669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.808809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.986 [2024-11-26 18:30:35.808833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.820354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.986 [2024-11-26 18:30:35.820380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.831001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.986 [2024-11-26 18:30:35.831025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.986 [2024-11-26 18:30:35.846566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.986 [2024-11-26 18:30:35.846592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.861257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.861283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.870668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.870694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.884410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.884437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.894178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.894203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.909675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.909700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.918774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.918799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.933716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.933741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.943316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.943343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.954932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.954956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.965667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.965692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.979998] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.980039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:47.987 [2024-11-26 18:30:35.989853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:47.987 [2024-11-26 18:30:35.989879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.006118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.006143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.021289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.021323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.030444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.030470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.042095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.042119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.055812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.055838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.065714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.065739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.081697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.081723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.092036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.092062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.103999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.104024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.115375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.115400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.127701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.127726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.137274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.137299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.153363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.153387] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.162636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.162660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.175085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.175109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.190078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.190103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.206009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.206049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.220294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.220331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.230045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.230069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.244584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.244626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.245 [2024-11-26 18:30:36.254001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.245 [2024-11-26 18:30:36.254026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.503 [2024-11-26 18:30:36.268602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.503 [2024-11-26 18:30:36.268627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.503 [2024-11-26 18:30:36.277893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.503 [2024-11-26 18:30:36.277918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.503 [2024-11-26 18:30:36.292941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.503 [2024-11-26 18:30:36.292965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.503 [2024-11-26 18:30:36.303493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.503 [2024-11-26 18:30:36.303519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.503 [2024-11-26 18:30:36.314437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.314463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.330151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.330176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.345393] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.345435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.355008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.355034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.366952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.366976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.377875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.377899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.391655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.391681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.401386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.401413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.413288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.413335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.429326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.429367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.438945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.438969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.450760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.450784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.464704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.464729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.474170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.474194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.488968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.488992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.498718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.498743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.504 [2024-11-26 18:30:36.512480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.504 [2024-11-26 18:30:36.512505] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.522144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.522167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.537182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.537206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.555220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.555244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.564759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.564783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.577048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.577072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.593158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.593182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.602283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.602330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.616103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.616143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.625575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.625600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 11710.50 IOPS, 91.49 MiB/s [2024-11-26T17:30:36.773Z] [2024-11-26 18:30:36.637470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.637504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.653481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.653505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.662808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.662832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.677368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.677393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.686945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.686970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 
18:30:36.698874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.698898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.713676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.713700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.722973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.722997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.734758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.734783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.749141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.749182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:48.762 [2024-11-26 18:30:36.758763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:48.762 [2024-11-26 18:30:36.758786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.773909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.773935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.783374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.783399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.795115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.795139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.809188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.809214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.818832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.818857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.830730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.830755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.843581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.843610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.853393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.853418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.869281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.869339] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.878851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.878890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.894750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.894775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.907053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.907079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.916967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.916991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.928656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.928680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.939206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.939230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.950077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.950101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.963503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.963529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.972772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.972795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.984488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.984514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:36.995248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:36.995271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:37.005781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:37.005804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 [2024-11-26 18:30:37.021493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.020 [2024-11-26 18:30:37.021518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.278 [2024-11-26 18:30:37.031162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.278 [2024-11-26 18:30:37.031187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.042948] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.042973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.053554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.053580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.068711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.068736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.078240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.078263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.092427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.092462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.102512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.102538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.116321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.116354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.126027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.126068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.137480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.137507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.153251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.153290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.162726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.162750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.177086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.177112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.186886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.186910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.198391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.198417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.212740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.212766] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.222461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.222487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.236479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.236506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.246471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.246496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.260088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.260112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.269694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.269733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.279 [2024-11-26 18:30:37.281169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.279 [2024-11-26 18:30:37.281192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.290783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.290809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.301879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.301904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.317616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.317671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.327055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.327080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.338704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.338728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.351264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.351290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.363999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.364025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.373228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.373252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.385132] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.385156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.401120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.401144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.410503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.410529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.424822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.424847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.435406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.435433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.444603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.444628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.456388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.456414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.466968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.466991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.477642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.477666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.493172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.493199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.502801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.502825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.514699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.514723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.527527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.527553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.537 [2024-11-26 18:30:37.537276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.537 [2024-11-26 18:30:37.537324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.549024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.549050] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.560079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.560102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.570337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.570378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.585038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.585065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.593778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.593819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.605713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.605737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.622100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.622140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.631884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.631910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 11735.33 IOPS, 91.68 MiB/s [2024-11-26T17:30:37.806Z] [2024-11-26 18:30:37.643569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.643595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.654468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.654495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.670542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.670569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.685805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.685845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.695053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.695079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.707029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.707054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 18:30:37.717713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.795 [2024-11-26 18:30:37.717737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.795 [2024-11-26 
18:30:37.733392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.796 [2024-11-26 18:30:37.733419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.796 [2024-11-26 18:30:37.742238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.796 [2024-11-26 18:30:37.742263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.796 [2024-11-26 18:30:37.756780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.796 [2024-11-26 18:30:37.756805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.796 [2024-11-26 18:30:37.766332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.796 [2024-11-26 18:30:37.766359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.796 [2024-11-26 18:30:37.780350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.796 [2024-11-26 18:30:37.780376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.796 [2024-11-26 18:30:37.790226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.796 [2024-11-26 18:30:37.790251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.796 [2024-11-26 18:30:37.804492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:49.796 [2024-11-26 18:30:37.804519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.814166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.814190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.829097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.829121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.838224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.838250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.852655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.852697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.862694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.862719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.877889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.877914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.895319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.895346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.904825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.904849] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.916357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.916382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.926373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.926399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.942354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.942378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.957919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.957946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.966899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.966924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.978699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.978725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:37.993098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:37.993132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:38.002441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:38.002468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:38.016866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:38.016890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.054 [2024-11-26 18:30:38.026405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.054 [2024-11-26 18:30:38.026432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.055 [2024-11-26 18:30:38.040005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.055 [2024-11-26 18:30:38.040031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.055 [2024-11-26 18:30:38.049495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.055 [2024-11-26 18:30:38.049523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.055 [2024-11-26 18:30:38.061437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.055 [2024-11-26 18:30:38.061464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.077457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.077483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.086820] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.086846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.100399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.100426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.109894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.109920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.121768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.121794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.137674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.137699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.147025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.147049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.158936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.158961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.171756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.171782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.181393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.181419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.193256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.193282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.209211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.209237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.218470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.218505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.233450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.233476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.243095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.243119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.255086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.255111] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.266034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.266058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.281722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.281761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.291205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.291229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.303410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.303436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.313 [2024-11-26 18:30:38.314603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.313 [2024-11-26 18:30:38.314628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.329332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.329358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.339261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.339300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.351470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.351496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.362001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.362026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.377050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.377075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.385984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.386008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.397682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.397721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.413413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.413439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.423077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.423101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.434743] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.434768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.449439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.449488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.467300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.467331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.477018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.477041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.488602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.488626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.499780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.499804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.510245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.510270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.524235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.524275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.533667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.533692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.545669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.545694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.562536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.562562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.571 [2024-11-26 18:30:38.577204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.571 [2024-11-26 18:30:38.577230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.586709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.586734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.601166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.601192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.619721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.619761] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.629976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.630000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 11748.00 IOPS, 91.78 MiB/s [2024-11-26T17:30:38.841Z] [2024-11-26 18:30:38.644767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.644794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.653829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.653854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.665295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.665330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.680228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.680254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.688953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.688984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.700771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.700797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.711532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.711558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.722163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.722188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.734651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.734677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.749398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.749425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.758439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.758466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.772183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.772225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.781548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.781574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 
18:30:38.793418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.793445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.809085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.809113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.818621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.818661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:50.830 [2024-11-26 18:30:38.832807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:50.830 [2024-11-26 18:30:38.832832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.841996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.842022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.854099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.854124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.868015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.868057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.877440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.877466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.891870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.891894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.901422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.901448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.912962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.913002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.923098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.923123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.934343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.934386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.946810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.946836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.961962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.962003] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.971181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.971206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.982742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.982768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:38.996116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:38.996143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:39.005665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:39.005707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:39.017436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:39.017463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:39.033052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:39.033078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:39.042898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:39.042922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:39.056622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:39.056645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:39.065789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:39.065815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:39.077624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:39.077663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.088 [2024-11-26 18:30:39.092533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.088 [2024-11-26 18:30:39.092573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.346 [2024-11-26 18:30:39.102151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.346 [2024-11-26 18:30:39.102177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.346 [2024-11-26 18:30:39.116700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.346 [2024-11-26 18:30:39.116740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.346 [2024-11-26 18:30:39.125995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.346 [2024-11-26 18:30:39.126019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.346 [2024-11-26 18:30:39.137852] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.346 [2024-11-26 18:30:39.137878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.346 [2024-11-26 18:30:39.151115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.346 [2024-11-26 18:30:39.151155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.346 [2024-11-26 18:30:39.164949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.346 [2024-11-26 18:30:39.164976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.346 [2024-11-26 18:30:39.174456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.346 [2024-11-26 18:30:39.174484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.346 [2024-11-26 18:30:39.190276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.346 [2024-11-26 18:30:39.190327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.346 [2024-11-26 18:30:39.206016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.346 [2024-11-26 18:30:39.206070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.215857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.215882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.227721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.227747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.238459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.238486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.252476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.252503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.261624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.261664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.273148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.273173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.284200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.284227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.294937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.294963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.308130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.308157] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.317541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.317568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.329468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.329494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.347 [2024-11-26 18:30:39.342461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.347 [2024-11-26 18:30:39.342488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.356899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.356926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.366148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.366174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.380075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.380102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.389512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.389540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.401241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.401267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.412096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.412121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.422264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.422289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.437108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.437134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.446457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.446483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.461064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.461090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.470577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.470619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.484219] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.484246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.493558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.493602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.505137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.505163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.521116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.521142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.530482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.530508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.545063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.545089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.554671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.554697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.565857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.565882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.581686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.581737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.590483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.590510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.605215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.605241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.605 [2024-11-26 18:30:39.614362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.605 [2024-11-26 18:30:39.614389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.625707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.625732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.642502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.642529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 11763.00 IOPS, 91.90 MiB/s 00:34:51.864 Latency(us) 00:34:51.864 [2024-11-26T17:30:39.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:51.864 Job: 
Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:34:51.864 Nvme1n1 : 5.01 11768.92 91.94 0.00 0.00 10863.69 3046.21 17961.72 00:34:51.864 [2024-11-26T17:30:39.875Z] =================================================================================================================== 00:34:51.864 [2024-11-26T17:30:39.875Z] Total : 11768.92 91.94 0.00 0.00 10863.69 3046.21 17961.72 00:34:51.864 [2024-11-26 18:30:39.654423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.654450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.659682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.659705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.667696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.667720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.675700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.675727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.683741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.683779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.691744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.691785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.699737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.699777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.707747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.707790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.715743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.715782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.731769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.731822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.739741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.739791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.747742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.747782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.755745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.755787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.763745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.763786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.771742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.771783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.779741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.779782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.787740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.787781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.795738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.795774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.803727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.803766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.811695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.811730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.819676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.819695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.827680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.827700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.835695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.835715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.843735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.843774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.851740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.851778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.859748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.859785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:51.864 [2024-11-26 18:30:39.867695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:51.864 [2024-11-26 18:30:39.867715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.122 [2024-11-26 18:30:39.875698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:34:52.122 [2024-11-26 18:30:39.875732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.122 [2024-11-26 18:30:39.883694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:52.122 [2024-11-26 18:30:39.883714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:52.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (762196) - No such process 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 762196 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:52.122 delay0 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.122 18:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:52.122 [2024-11-26 18:30:40.008762] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:00.225 Initializing NVMe Controllers 00:35:00.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:00.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:00.225 Initialization complete. Launching workers. 
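The zcopy abort step traced above reduces to a short RPC sequence: drop the namespace that the paused-subsystem loop was hammering, wrap malloc0 in a delay bdev so I/O stays in flight long enough to be aborted, re-expose it as NSID 1, and run the abort example against the TCP listener. A minimal sketch of those calls (rpc_cmd is rendered here as scripts/rpc.py and the relative paths are assumptions; the arguments themselves are taken from the trace):

    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # inject roughly 1s of latency on reads and writes so aborts have something to catch
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # 5 seconds of queued random I/O plus aborts against the target at 10.0.0.2:4420
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The summary that follows ("abort submitted 21063, failed to submit 121") is the expected outcome of that sketch: most slow I/Os are caught and aborted, a small remainder completes or fails to submit.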
00:35:00.225 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 247, failed: 20937 00:35:00.225 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21063, failed to submit 121 00:35:00.225 success 20971, unsuccessful 92, failed 0 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:00.225 rmmod nvme_tcp 00:35:00.225 rmmod nvme_fabrics 00:35:00.225 rmmod nvme_keyring 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 760866 ']' 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 760866 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 760866 ']' 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 760866 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 760866 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 760866' 00:35:00.225 killing process with pid 760866 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 760866 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 760866 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:00.225 18:30:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:00.225 18:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:01.601 00:35:01.601 real 0m28.926s 00:35:01.601 user 0m41.239s 00:35:01.601 sys 0m10.070s 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:01.601 ************************************ 00:35:01.601 END TEST nvmf_zcopy 00:35:01.601 ************************************ 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:01.601 ************************************ 00:35:01.601 START TEST nvmf_nmic 00:35:01.601 ************************************ 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:01.601 * Looking for test storage... 
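Each suite finishes with the same nvmftestfini teardown that appears just above: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, strip the SPDK_NVMF iptables rule, and tear down the target network namespace. Roughly, in shell terms (the netns deletion is an assumption about what _remove_spdk_ns does; the interface and namespace names are the ones used throughout this run):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                    # stop the nvmf_tgt application
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the SPDK_NVMF-tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1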
00:35:01.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:35:01.601 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:01.860 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:01.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.861 --rc genhtml_branch_coverage=1 00:35:01.861 --rc genhtml_function_coverage=1 00:35:01.861 --rc genhtml_legend=1 00:35:01.861 --rc geninfo_all_blocks=1 00:35:01.861 --rc geninfo_unexecuted_blocks=1 00:35:01.861 00:35:01.861 ' 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:01.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.861 --rc genhtml_branch_coverage=1 00:35:01.861 --rc genhtml_function_coverage=1 00:35:01.861 --rc genhtml_legend=1 00:35:01.861 --rc geninfo_all_blocks=1 00:35:01.861 --rc geninfo_unexecuted_blocks=1 00:35:01.861 00:35:01.861 ' 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:01.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.861 --rc genhtml_branch_coverage=1 00:35:01.861 --rc genhtml_function_coverage=1 00:35:01.861 --rc genhtml_legend=1 00:35:01.861 --rc geninfo_all_blocks=1 00:35:01.861 --rc geninfo_unexecuted_blocks=1 00:35:01.861 00:35:01.861 ' 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:01.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:01.861 --rc genhtml_branch_coverage=1 00:35:01.861 --rc genhtml_function_coverage=1 00:35:01.861 --rc genhtml_legend=1 00:35:01.861 --rc geninfo_all_blocks=1 00:35:01.861 --rc geninfo_unexecuted_blocks=1 00:35:01.861 00:35:01.861 ' 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.861 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:01.862 18:30:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:01.862 18:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:04.445 18:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:04.445 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.445 18:30:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:04.445 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:04.445 Found net devices under 0000:09:00.0: cvl_0_0 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.445 
18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:04.445 Found net devices under 0000:09:00.1: cvl_0_1 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
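The interface plumbing traced up to this point is straightforward: the target-side e810 port is moved into its own network namespace and the two ports get addresses on the same 10.0.0.0/24 subnet, so initiator traffic to the target has to leave the host namespace and cross the real link. A sketch using the names from this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # namespace that will hold the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

The lines that follow bring both links up, add the port-4420 iptables ACCEPT rule, and ping in each direction before nvmf_tgt is launched inside the namespace.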
00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:04.445 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:04.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:04.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:35:04.446 00:35:04.446 --- 10.0.0.2 ping statistics --- 00:35:04.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.446 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:04.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:04.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:35:04.446 00:35:04.446 --- 10.0.0.1 ping statistics --- 00:35:04.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.446 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:04.446 18:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=765700 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 765700 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 765700 ']' 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 [2024-11-26 18:30:52.050577] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:04.446 [2024-11-26 18:30:52.051674] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:35:04.446 [2024-11-26 18:30:52.051753] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:04.446 [2024-11-26 18:30:52.125254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:04.446 [2024-11-26 18:30:52.185218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.446 [2024-11-26 18:30:52.185267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.446 [2024-11-26 18:30:52.185312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.446 [2024-11-26 18:30:52.185324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.446 [2024-11-26 18:30:52.185333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:04.446 [2024-11-26 18:30:52.186891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.446 [2024-11-26 18:30:52.186997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:04.446 [2024-11-26 18:30:52.187083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:04.446 [2024-11-26 18:30:52.187091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.446 [2024-11-26 18:30:52.273771] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:04.446 [2024-11-26 18:30:52.273957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:04.446 [2024-11-26 18:30:52.274253] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:35:04.446 [2024-11-26 18:30:52.274826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:04.446 [2024-11-26 18:30:52.275039] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 [2024-11-26 18:30:52.323917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 Malloc0 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:04.446 
18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 [2024-11-26 18:30:52.392070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:04.446 test case1: single bdev can't be used in multiple subsystems 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.446 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.446 [2024-11-26 18:30:52.415837] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:04.446 [2024-11-26 18:30:52.415865] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:04.446 [2024-11-26 18:30:52.415890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.446 request: 00:35:04.446 { 00:35:04.446 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:04.446 "namespace": { 00:35:04.446 "bdev_name": "Malloc0", 00:35:04.446 "no_auto_visible": false 00:35:04.446 }, 00:35:04.446 "method": "nvmf_subsystem_add_ns", 00:35:04.446 "req_id": 1 00:35:04.446 } 00:35:04.446 Got JSON-RPC error response 00:35:04.446 response: 00:35:04.446 { 00:35:04.447 "code": -32602, 00:35:04.447 "message": "Invalid parameters" 00:35:04.447 } 00:35:04.447 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:04.447 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:04.447 18:30:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:04.447 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:04.447 Adding namespace failed - expected result. 00:35:04.447 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:04.447 test case2: host connect to nvmf target in multiple paths 00:35:04.447 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:04.447 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.447 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:04.447 [2024-11-26 18:30:52.423926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:04.447 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.447 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:04.704 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:04.960 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:04.960 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:04.960 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:04.960 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:04.960 18:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:06.856 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:06.856 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:06.856 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:06.856 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:06.856 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:06.856 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:06.856 18:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:06.856 [global] 00:35:06.856 thread=1 00:35:06.856 invalidate=1 
00:35:06.856 rw=write 00:35:06.856 time_based=1 00:35:06.856 runtime=1 00:35:06.856 ioengine=libaio 00:35:06.856 direct=1 00:35:06.856 bs=4096 00:35:06.856 iodepth=1 00:35:06.856 norandommap=0 00:35:06.856 numjobs=1 00:35:06.856 00:35:06.856 verify_dump=1 00:35:06.856 verify_backlog=512 00:35:06.856 verify_state_save=0 00:35:06.856 do_verify=1 00:35:06.856 verify=crc32c-intel 00:35:06.856 [job0] 00:35:06.856 filename=/dev/nvme0n1 00:35:06.856 Could not set queue depth (nvme0n1) 00:35:07.113 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:07.113 fio-3.35 00:35:07.113 Starting 1 thread 00:35:08.484 00:35:08.484 job0: (groupid=0, jobs=1): err= 0: pid=766083: Tue Nov 26 18:30:56 2024 00:35:08.484 read: IOPS=1500, BW=6000KiB/s (6144kB/s)(6156KiB/1026msec) 00:35:08.484 slat (nsec): min=7262, max=63654, avg=14574.64, stdev=5828.33 00:35:08.484 clat (usec): min=212, max=42001, avg=347.82, stdev=1833.09 00:35:08.484 lat (usec): min=226, max=42017, avg=362.40, stdev=1833.54 00:35:08.484 clat percentiles (usec): 00:35:08.484 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 249], 00:35:08.484 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 262], 60.00th=[ 269], 00:35:08.484 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:35:08.484 | 99.00th=[ 494], 99.50th=[ 545], 99.90th=[41681], 99.95th=[42206], 00:35:08.484 | 99.99th=[42206] 00:35:08.484 write: IOPS=1996, BW=7984KiB/s (8176kB/s)(8192KiB/1026msec); 0 zone resets 00:35:08.484 slat (usec): min=8, max=30586, avg=33.97, stdev=675.66 00:35:08.484 clat (usec): min=140, max=294, avg=186.53, stdev=24.66 00:35:08.484 lat (usec): min=158, max=30809, avg=220.50, stdev=677.03 00:35:08.484 clat percentiles (usec): 00:35:08.484 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 163], 00:35:08.484 | 30.00th=[ 169], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 194], 00:35:08.484 | 70.00th=[ 198], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 225], 00:35:08.484 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 289], 99.95th=[ 293], 00:35:08.484 | 99.99th=[ 297] 00:35:08.484 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=2 00:35:08.484 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:35:08.484 lat (usec) : 250=65.40%, 500=34.21%, 750=0.31% 00:35:08.484 lat (msec) : 50=0.08% 00:35:08.484 cpu : usr=3.41%, sys=8.39%, ctx=3590, majf=0, minf=1 00:35:08.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.484 issued rwts: total=1539,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:08.484 00:35:08.484 Run status group 0 (all jobs): 00:35:08.484 READ: bw=6000KiB/s (6144kB/s), 6000KiB/s-6000KiB/s (6144kB/s-6144kB/s), io=6156KiB (6304kB), run=1026-1026msec 00:35:08.484 WRITE: bw=7984KiB/s (8176kB/s), 7984KiB/s-7984KiB/s (8176kB/s-8176kB/s), io=8192KiB (8389kB), run=1026-1026msec 00:35:08.484 00:35:08.484 Disk stats (read/write): 00:35:08.484 nvme0n1: ios=1578/2040, merge=0/0, ticks=580/371, in_queue=951, util=99.10% 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:08.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:08.484 18:30:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:08.484 rmmod nvme_tcp 00:35:08.484 rmmod nvme_fabrics 00:35:08.484 rmmod nvme_keyring 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 765700 ']' 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 765700 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 765700 ']' 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 765700 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 765700 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:08.484 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 765700' 00:35:08.485 killing process with pid 765700 00:35:08.485 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 765700 00:35:08.485 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 765700 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:08.743 18:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.647 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:10.647 00:35:10.647 real 0m9.116s 00:35:10.647 user 0m16.613s 00:35:10.647 sys 0m3.595s 00:35:10.647 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.647 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:10.647 ************************************ 00:35:10.647 END TEST nvmf_nmic 00:35:10.647 ************************************ 00:35:10.906 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:10.906 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:10.906 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:10.907 ************************************ 00:35:10.907 START TEST nvmf_fio_target 00:35:10.907 ************************************ 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:10.907 * Looking for test storage... 
00:35:10.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:10.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.907 --rc genhtml_branch_coverage=1 00:35:10.907 --rc genhtml_function_coverage=1 00:35:10.907 --rc genhtml_legend=1 00:35:10.907 --rc geninfo_all_blocks=1 00:35:10.907 --rc geninfo_unexecuted_blocks=1 00:35:10.907 00:35:10.907 ' 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:10.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.907 --rc genhtml_branch_coverage=1 00:35:10.907 --rc genhtml_function_coverage=1 00:35:10.907 --rc genhtml_legend=1 00:35:10.907 --rc geninfo_all_blocks=1 00:35:10.907 --rc geninfo_unexecuted_blocks=1 00:35:10.907 00:35:10.907 ' 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:10.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.907 --rc genhtml_branch_coverage=1 00:35:10.907 --rc genhtml_function_coverage=1 00:35:10.907 --rc genhtml_legend=1 00:35:10.907 --rc geninfo_all_blocks=1 00:35:10.907 --rc geninfo_unexecuted_blocks=1 00:35:10.907 00:35:10.907 ' 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:10.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.907 --rc genhtml_branch_coverage=1 00:35:10.907 --rc genhtml_function_coverage=1 00:35:10.907 --rc genhtml_legend=1 00:35:10.907 --rc geninfo_all_blocks=1 00:35:10.907 --rc geninfo_unexecuted_blocks=1 00:35:10.907 
00:35:10.907 ' 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.907 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:10.908 18:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:13.463 18:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:13.463 18:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:13.463 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:13.463 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:13.463 Found net 
devices under 0000:09:00.0: cvl_0_0 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:13.463 Found net devices under 0000:09:00.1: cvl_0_1 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:13.463 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:13.464 18:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:13.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:13.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:35:13.464 00:35:13.464 --- 10.0.0.2 ping statistics --- 00:35:13.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.464 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:13.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:13.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:35:13.464 00:35:13.464 --- 10.0.0.1 ping statistics --- 00:35:13.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:13.464 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=768275 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 768275 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 768275 ']' 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:13.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.464 [2024-11-26 18:31:01.108559] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:13.464 [2024-11-26 18:31:01.109601] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:35:13.464 [2024-11-26 18:31:01.109676] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:13.464 [2024-11-26 18:31:01.182231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:13.464 [2024-11-26 18:31:01.241627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:13.464 [2024-11-26 18:31:01.241684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:13.464 [2024-11-26 18:31:01.241697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:13.464 [2024-11-26 18:31:01.241708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:13.464 [2024-11-26 18:31:01.241717] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:13.464 [2024-11-26 18:31:01.243332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:13.464 [2024-11-26 18:31:01.243394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:13.464 [2024-11-26 18:31:01.243465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:13.464 [2024-11-26 18:31:01.243461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:13.464 [2024-11-26 18:31:01.341412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:13.464 [2024-11-26 18:31:01.341573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:13.464 [2024-11-26 18:31:01.341843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:13.464 [2024-11-26 18:31:01.342513] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:13.464 [2024-11-26 18:31:01.342759] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:13.464 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:13.723 [2024-11-26 18:31:01.694070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:13.723 18:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:14.289 18:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:14.289 18:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:14.546 18:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:14.546 18:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:14.804 18:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:14.804 18:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:15.061 18:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:15.061 18:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:15.319 18:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:15.577 18:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:15.577 18:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:15.835 18:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:15.835 18:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:16.093 18:31:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:16.093 18:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:16.659 18:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:16.659 18:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:16.659 18:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:16.917 18:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:16.917 18:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:17.174 18:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:17.432 [2024-11-26 18:31:05.414201] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:17.432 18:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:17.997 18:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:18.255 18:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:18.255 18:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:18.255 18:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:18.255 18:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:18.255 18:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:18.255 18:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:18.255 18:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:20.781 18:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:20.781 18:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:20.781 18:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:20.781 18:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:20.781 18:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:20.781 18:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:20.781 18:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:20.781 [global] 00:35:20.781 thread=1 00:35:20.781 invalidate=1 00:35:20.781 rw=write 00:35:20.781 time_based=1 00:35:20.781 runtime=1 00:35:20.781 ioengine=libaio 00:35:20.781 direct=1 00:35:20.781 bs=4096 00:35:20.781 iodepth=1 00:35:20.781 norandommap=0 00:35:20.781 numjobs=1 00:35:20.781 00:35:20.781 verify_dump=1 00:35:20.781 verify_backlog=512 00:35:20.781 verify_state_save=0 00:35:20.781 do_verify=1 00:35:20.781 verify=crc32c-intel 00:35:20.781 [job0] 00:35:20.781 filename=/dev/nvme0n1 00:35:20.781 [job1] 00:35:20.781 filename=/dev/nvme0n2 00:35:20.781 [job2] 00:35:20.781 filename=/dev/nvme0n3 00:35:20.781 [job3] 00:35:20.781 filename=/dev/nvme0n4 00:35:20.781 Could not set queue depth (nvme0n1) 00:35:20.781 Could not set queue depth (nvme0n2) 00:35:20.781 Could not set queue depth (nvme0n3) 00:35:20.781 Could not set queue depth (nvme0n4) 00:35:20.781 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:20.781 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:20.781 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:20.781 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:20.781 fio-3.35 00:35:20.781 Starting 4 threads 00:35:22.159 00:35:22.159 job0: (groupid=0, jobs=1): err= 0: pid=769227: Tue Nov 26 18:31:09 2024 00:35:22.159 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:35:22.159 slat (nsec): min=4463, max=57751, avg=11766.09, stdev=6715.05 00:35:22.159 clat (usec): min=206, max=840, avg=378.21, stdev=117.66 00:35:22.159 lat (usec): min=214, max=847, avg=389.98, stdev=119.93 00:35:22.159 clat percentiles (usec): 00:35:22.159 | 1.00th=[ 215], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 265], 00:35:22.159 | 30.00th=[ 314], 40.00th=[ 334], 50.00th=[ 359], 60.00th=[ 392], 00:35:22.159 | 70.00th=[ 424], 80.00th=[ 478], 90.00th=[ 553], 95.00th=[ 611], 00:35:22.159 | 99.00th=[ 693], 99.50th=[ 709], 99.90th=[ 799], 99.95th=[ 840], 00:35:22.159 | 99.99th=[ 840] 00:35:22.159 write: IOPS=1658, BW=6633KiB/s (6793kB/s)(6640KiB/1001msec); 0 zone resets 00:35:22.159 slat (nsec): min=5606, max=49570, avg=11424.43, stdev=6279.13 00:35:22.159 clat (usec): min=134, max=968, avg=223.42, stdev=63.95 00:35:22.159 lat (usec): min=143, max=978, avg=234.85, stdev=64.95 00:35:22.159 clat percentiles (usec): 00:35:22.159 | 1.00th=[ 145], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 176], 00:35:22.159 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 200], 60.00th=[ 223], 00:35:22.159 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 330], 00:35:22.159 | 99.00th=[ 420], 99.50th=[ 
502], 99.90th=[ 783], 99.95th=[ 971], 00:35:22.159 | 99.99th=[ 971] 00:35:22.159 bw ( KiB/s): min= 8192, max= 8192, per=40.58%, avg=8192.00, stdev= 0.00, samples=1 00:35:22.159 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:35:22.159 lat (usec) : 250=43.96%, 500=48.31%, 750=7.57%, 1000=0.16% 00:35:22.159 cpu : usr=3.20%, sys=4.40%, ctx=3196, majf=0, minf=1 00:35:22.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.159 issued rwts: total=1536,1660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:22.159 job1: (groupid=0, jobs=1): err= 0: pid=769228: Tue Nov 26 18:31:09 2024 00:35:22.159 read: IOPS=1469, BW=5879KiB/s (6020kB/s)(6108KiB/1039msec) 00:35:22.159 slat (nsec): min=5640, max=37906, avg=11365.30, stdev=5867.27 00:35:22.159 clat (usec): min=193, max=41597, avg=396.44, stdev=1058.39 00:35:22.159 lat (usec): min=199, max=41630, avg=407.81, stdev=1059.04 00:35:22.159 clat percentiles (usec): 00:35:22.159 | 1.00th=[ 219], 5.00th=[ 243], 10.00th=[ 265], 20.00th=[ 302], 00:35:22.159 | 30.00th=[ 322], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 375], 00:35:22.159 | 70.00th=[ 408], 80.00th=[ 441], 90.00th=[ 486], 95.00th=[ 523], 00:35:22.159 | 99.00th=[ 611], 99.50th=[ 619], 99.90th=[ 676], 99.95th=[41681], 00:35:22.159 | 99.99th=[41681] 00:35:22.159 write: IOPS=1478, BW=5913KiB/s (6055kB/s)(6144KiB/1039msec); 0 zone resets 00:35:22.159 slat (nsec): min=7253, max=41355, avg=12308.93, stdev=5929.99 00:35:22.159 clat (usec): min=135, max=476, avg=250.86, stdev=46.94 00:35:22.159 lat (usec): min=143, max=491, avg=263.17, stdev=46.47 00:35:22.159 clat percentiles (usec): 00:35:22.159 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 210], 00:35:22.159 | 30.00th=[ 229], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 258], 00:35:22.159 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 343], 00:35:22.159 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 469], 99.95th=[ 478], 00:35:22.159 | 99.99th=[ 478] 00:35:22.160 bw ( KiB/s): min= 4096, max= 8192, per=30.43%, avg=6144.00, stdev=2896.31, samples=2 00:35:22.160 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:35:22.160 lat (usec) : 250=28.47%, 500=67.65%, 750=3.85% 00:35:22.160 lat (msec) : 50=0.03% 00:35:22.160 cpu : usr=2.50%, sys=5.01%, ctx=3064, majf=0, minf=1 00:35:22.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.160 issued rwts: total=1527,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:22.160 job2: (groupid=0, jobs=1): err= 0: pid=769231: Tue Nov 26 18:31:09 2024 00:35:22.160 read: IOPS=1320, BW=5283KiB/s (5410kB/s)(5288KiB/1001msec) 00:35:22.160 slat (nsec): min=6366, max=53147, avg=13327.15, stdev=7031.91 00:35:22.160 clat (usec): min=236, max=1022, avg=421.86, stdev=94.76 00:35:22.160 lat (usec): min=244, max=1028, avg=435.18, stdev=95.21 00:35:22.160 clat percentiles (usec): 00:35:22.160 | 1.00th=[ 262], 5.00th=[ 285], 10.00th=[ 310], 20.00th=[ 334], 00:35:22.160 | 30.00th=[ 359], 40.00th=[ 392], 50.00th=[ 416], 60.00th=[ 445], 00:35:22.160 | 
70.00th=[ 469], 80.00th=[ 498], 90.00th=[ 537], 95.00th=[ 578], 00:35:22.160 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 930], 99.95th=[ 1020], 00:35:22.160 | 99.99th=[ 1020] 00:35:22.160 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:35:22.160 slat (nsec): min=8167, max=61156, avg=15051.47, stdev=7811.91 00:35:22.160 clat (usec): min=145, max=500, avg=253.02, stdev=48.74 00:35:22.160 lat (usec): min=154, max=525, avg=268.07, stdev=48.69 00:35:22.160 clat percentiles (usec): 00:35:22.160 | 1.00th=[ 157], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 210], 00:35:22.160 | 30.00th=[ 229], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 262], 00:35:22.160 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 343], 00:35:22.160 | 99.00th=[ 404], 99.50th=[ 437], 99.90th=[ 482], 99.95th=[ 502], 00:35:22.160 | 99.99th=[ 502] 00:35:22.160 bw ( KiB/s): min= 7816, max= 7816, per=38.71%, avg=7816.00, stdev= 0.00, samples=1 00:35:22.160 iops : min= 1954, max= 1954, avg=1954.00, stdev= 0.00, samples=1 00:35:22.160 lat (usec) : 250=24.81%, 500=66.10%, 750=8.96%, 1000=0.10% 00:35:22.160 lat (msec) : 2=0.03% 00:35:22.160 cpu : usr=2.70%, sys=5.60%, ctx=2861, majf=0, minf=1 00:35:22.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.160 issued rwts: total=1322,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:22.160 job3: (groupid=0, jobs=1): err= 0: pid=769232: Tue Nov 26 18:31:09 2024 00:35:22.160 read: IOPS=23, BW=92.9KiB/s (95.2kB/s)(96.0KiB/1033msec) 00:35:22.160 slat (nsec): min=9440, max=34882, avg=20510.33, stdev=9458.63 00:35:22.160 clat (usec): min=397, max=42026, avg=38388.28, stdev=11699.17 00:35:22.160 lat (usec): min=418, max=42040, avg=38408.79, stdev=11697.06 00:35:22.160 clat percentiles (usec): 00:35:22.160 | 1.00th=[ 400], 5.00th=[ 437], 10.00th=[41157], 20.00th=[41157], 00:35:22.160 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:22.160 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:22.160 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:22.160 | 99.99th=[42206] 00:35:22.160 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:35:22.160 slat (nsec): min=7687, max=24898, avg=9378.54, stdev=2593.97 00:35:22.160 clat (usec): min=184, max=273, avg=204.01, stdev=10.79 00:35:22.160 lat (usec): min=192, max=297, avg=213.39, stdev=11.50 00:35:22.160 clat percentiles (usec): 00:35:22.160 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:35:22.160 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:35:22.160 | 70.00th=[ 210], 80.00th=[ 212], 90.00th=[ 217], 95.00th=[ 225], 00:35:22.160 | 99.00th=[ 233], 99.50th=[ 239], 99.90th=[ 273], 99.95th=[ 273], 00:35:22.160 | 99.99th=[ 273] 00:35:22.160 bw ( KiB/s): min= 4096, max= 4096, per=20.29%, avg=4096.00, stdev= 0.00, samples=1 00:35:22.160 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:22.160 lat (usec) : 250=95.34%, 500=0.56% 00:35:22.160 lat (msec) : 50=4.10% 00:35:22.160 cpu : usr=0.39%, sys=0.58%, ctx=536, majf=0, minf=2 00:35:22.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:22.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.160 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:22.160 00:35:22.160 Run status group 0 (all jobs): 00:35:22.160 READ: bw=16.6MiB/s (17.4MB/s), 92.9KiB/s-6138KiB/s (95.2kB/s-6285kB/s), io=17.2MiB (18.1MB), run=1001-1039msec 00:35:22.160 WRITE: bw=19.7MiB/s (20.7MB/s), 1983KiB/s-6633KiB/s (2030kB/s-6793kB/s), io=20.5MiB (21.5MB), run=1001-1039msec 00:35:22.160 00:35:22.160 Disk stats (read/write): 00:35:22.160 nvme0n1: ios=1312/1536, merge=0/0, ticks=475/328, in_queue=803, util=86.47% 00:35:22.160 nvme0n2: ios=1180/1536, merge=0/0, ticks=426/378, in_queue=804, util=86.67% 00:35:22.160 nvme0n3: ios=1079/1452, merge=0/0, ticks=1367/341, in_queue=1708, util=98.01% 00:35:22.160 nvme0n4: ios=63/512, merge=0/0, ticks=721/103, in_queue=824, util=90.52% 00:35:22.160 18:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:22.160 [global] 00:35:22.160 thread=1 00:35:22.160 invalidate=1 00:35:22.160 rw=randwrite 00:35:22.160 time_based=1 00:35:22.160 runtime=1 00:35:22.160 ioengine=libaio 00:35:22.160 direct=1 00:35:22.160 bs=4096 00:35:22.160 iodepth=1 00:35:22.160 norandommap=0 00:35:22.160 numjobs=1 00:35:22.160 00:35:22.160 verify_dump=1 00:35:22.160 verify_backlog=512 00:35:22.160 verify_state_save=0 00:35:22.160 do_verify=1 00:35:22.160 verify=crc32c-intel 00:35:22.160 [job0] 00:35:22.160 filename=/dev/nvme0n1 00:35:22.160 [job1] 00:35:22.160 filename=/dev/nvme0n2 00:35:22.160 [job2] 00:35:22.160 filename=/dev/nvme0n3 00:35:22.160 [job3] 00:35:22.160 filename=/dev/nvme0n4 00:35:22.160 Could not set queue depth (nvme0n1) 00:35:22.160 Could not set queue depth (nvme0n2) 00:35:22.160 Could not set queue depth (nvme0n3) 00:35:22.160 Could not set queue depth (nvme0n4) 00:35:22.160 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:22.160 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:22.160 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:22.160 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:22.160 fio-3.35 00:35:22.160 Starting 4 threads 00:35:23.532 00:35:23.532 job0: (groupid=0, jobs=1): err= 0: pid=769508: Tue Nov 26 18:31:11 2024 00:35:23.532 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:35:23.532 slat (nsec): min=7607, max=14511, avg=13239.09, stdev=1699.04 00:35:23.532 clat (usec): min=266, max=42057, avg=39250.87, stdev=8715.33 00:35:23.532 lat (usec): min=280, max=42071, avg=39264.11, stdev=8715.30 00:35:23.532 clat percentiles (usec): 00:35:23.532 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:23.532 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:23.532 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:23.532 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:23.532 | 99.99th=[42206] 00:35:23.532 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:35:23.532 slat (nsec): min=7341, max=59971, avg=10323.47, stdev=3666.46 00:35:23.532 clat (usec): min=150, 
max=1044, avg=254.34, stdev=56.39 00:35:23.532 lat (usec): min=158, max=1052, avg=264.67, stdev=56.64 00:35:23.532 clat percentiles (usec): 00:35:23.532 | 1.00th=[ 157], 5.00th=[ 176], 10.00th=[ 192], 20.00th=[ 223], 00:35:23.532 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 260], 00:35:23.532 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 326], 95.00th=[ 334], 00:35:23.532 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 1045], 99.95th=[ 1045], 00:35:23.532 | 99.99th=[ 1045] 00:35:23.532 bw ( KiB/s): min= 4096, max= 4096, per=27.23%, avg=4096.00, stdev= 0.00, samples=1 00:35:23.532 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:23.532 lat (usec) : 250=49.63%, 500=46.25% 00:35:23.532 lat (msec) : 2=0.19%, 50=3.93% 00:35:23.532 cpu : usr=0.20%, sys=0.80%, ctx=534, majf=0, minf=1 00:35:23.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.532 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:23.532 job1: (groupid=0, jobs=1): err= 0: pid=769527: Tue Nov 26 18:31:11 2024 00:35:23.532 read: IOPS=23, BW=93.5KiB/s (95.7kB/s)(96.0KiB/1027msec) 00:35:23.532 slat (nsec): min=8253, max=16079, avg=13824.46, stdev=1460.10 00:35:23.532 clat (usec): min=357, max=41061, avg=37583.51, stdev=11437.77 00:35:23.532 lat (usec): min=365, max=41074, avg=37597.33, stdev=11438.28 00:35:23.532 clat percentiles (usec): 00:35:23.532 | 1.00th=[ 359], 5.00th=[ 537], 10.00th=[40633], 20.00th=[40633], 00:35:23.532 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:23.532 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:23.532 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:23.532 | 99.99th=[41157] 00:35:23.532 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:35:23.532 slat (usec): min=6, max=117, avg=16.04, stdev=10.69 00:35:23.532 clat (usec): min=158, max=988, avg=223.85, stdev=71.61 00:35:23.532 lat (usec): min=165, max=995, avg=239.88, stdev=69.22 00:35:23.532 clat percentiles (usec): 00:35:23.532 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:35:23.532 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 208], 00:35:23.532 | 70.00th=[ 262], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:35:23.532 | 99.00th=[ 359], 99.50th=[ 758], 99.90th=[ 988], 99.95th=[ 988], 00:35:23.532 | 99.99th=[ 988] 00:35:23.532 bw ( KiB/s): min= 4096, max= 4096, per=27.23%, avg=4096.00, stdev= 0.00, samples=1 00:35:23.532 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:23.532 lat (usec) : 250=61.57%, 500=33.40%, 750=0.37%, 1000=0.56% 00:35:23.532 lat (msec) : 50=4.10% 00:35:23.532 cpu : usr=0.39%, sys=0.68%, ctx=539, majf=0, minf=1 00:35:23.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.532 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:23.532 job2: (groupid=0, jobs=1): err= 0: pid=769559: Tue Nov 26 18:31:11 2024 00:35:23.532 read: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec) 00:35:23.532 slat (nsec): min=4310, max=43694, avg=5775.99, stdev=2669.02 00:35:23.532 clat (usec): min=193, max=531, avg=246.12, stdev=45.66 00:35:23.532 lat (usec): min=199, max=552, avg=251.89, stdev=46.99 00:35:23.532 clat percentiles (usec): 00:35:23.532 | 1.00th=[ 221], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 227], 00:35:23.532 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:35:23.532 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 302], 00:35:23.532 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 519], 99.95th=[ 519], 00:35:23.532 | 99.99th=[ 529] 00:35:23.532 write: IOPS=2323, BW=9295KiB/s (9518kB/s)(9304KiB/1001msec); 0 zone resets 00:35:23.533 slat (nsec): min=5630, max=54516, avg=8646.09, stdev=6346.05 00:35:23.533 clat (usec): min=159, max=527, avg=195.58, stdev=46.15 00:35:23.533 lat (usec): min=164, max=552, avg=204.23, stdev=49.33 00:35:23.533 clat percentiles (usec): 00:35:23.533 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 165], 00:35:23.533 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 186], 00:35:23.533 | 70.00th=[ 202], 80.00th=[ 215], 90.00th=[ 249], 95.00th=[ 289], 00:35:23.533 | 99.00th=[ 375], 99.50th=[ 404], 99.90th=[ 478], 99.95th=[ 506], 00:35:23.533 | 99.99th=[ 529] 00:35:23.533 bw ( KiB/s): min= 8656, max= 8656, per=57.55%, avg=8656.00, stdev= 0.00, samples=1 00:35:23.533 iops : min= 2164, max= 2164, avg=2164.00, stdev= 0.00, samples=1 00:35:23.533 lat (usec) : 250=85.32%, 500=14.29%, 750=0.39% 00:35:23.533 cpu : usr=2.00%, sys=2.90%, ctx=4374, majf=0, minf=2 00:35:23.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.533 issued rwts: total=2048,2326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:23.533 job3: (groupid=0, jobs=1): err= 0: pid=769571: Tue Nov 26 18:31:11 2024 00:35:23.533 read: IOPS=62, BW=252KiB/s (258kB/s)(256KiB/1016msec) 00:35:23.533 slat (nsec): min=5343, max=26973, avg=10482.44, stdev=5051.71 00:35:23.533 clat (usec): min=228, max=42043, avg=13921.67, stdev=19587.81 00:35:23.533 lat (usec): min=234, max=42059, avg=13932.15, stdev=19590.78 00:35:23.533 clat percentiles (usec): 00:35:23.533 | 1.00th=[ 229], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 245], 00:35:23.533 | 30.00th=[ 306], 40.00th=[ 371], 50.00th=[ 433], 60.00th=[ 465], 00:35:23.533 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:23.533 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:23.533 | 99.99th=[42206] 00:35:23.533 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:35:23.533 slat (nsec): min=6641, max=41944, avg=11654.87, stdev=5882.98 00:35:23.533 clat (usec): min=157, max=377, avg=227.71, stdev=37.21 00:35:23.533 lat (usec): min=165, max=384, avg=239.36, stdev=36.36 00:35:23.533 clat percentiles (usec): 00:35:23.533 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 194], 00:35:23.533 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 233], 00:35:23.533 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 285], 00:35:23.533 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 379], 99.95th=[ 379], 00:35:23.533 | 99.99th=[ 379] 00:35:23.533 bw ( KiB/s): min= 4096, max= 4096, per=27.23%, avg=4096.00, stdev= 0.00, samples=1 
00:35:23.533 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:23.533 lat (usec) : 250=62.33%, 500=33.85%, 750=0.17% 00:35:23.533 lat (msec) : 50=3.65% 00:35:23.533 cpu : usr=0.59%, sys=0.30%, ctx=577, majf=0, minf=1 00:35:23.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:23.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:23.533 issued rwts: total=64,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:23.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:23.533 00:35:23.533 Run status group 0 (all jobs): 00:35:23.533 READ: bw=8405KiB/s (8607kB/s), 87.9KiB/s-8184KiB/s (90.0kB/s-8380kB/s), io=8632KiB (8839kB), run=1001-1027msec 00:35:23.533 WRITE: bw=14.7MiB/s (15.4MB/s), 1994KiB/s-9295KiB/s (2042kB/s-9518kB/s), io=15.1MiB (15.8MB), run=1001-1027msec 00:35:23.533 00:35:23.533 Disk stats (read/write): 00:35:23.533 nvme0n1: ios=68/512, merge=0/0, ticks=729/129, in_queue=858, util=86.37% 00:35:23.533 nvme0n2: ios=48/512, merge=0/0, ticks=879/105, in_queue=984, util=98.37% 00:35:23.533 nvme0n3: ios=1644/2048, merge=0/0, ticks=390/388, in_queue=778, util=88.78% 00:35:23.533 nvme0n4: ios=63/512, merge=0/0, ticks=909/113, in_queue=1022, util=98.52% 00:35:23.533 18:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:23.533 [global] 00:35:23.533 thread=1 00:35:23.533 invalidate=1 00:35:23.533 rw=write 00:35:23.533 time_based=1 00:35:23.533 runtime=1 00:35:23.533 ioengine=libaio 00:35:23.533 direct=1 00:35:23.533 bs=4096 00:35:23.533 iodepth=128 00:35:23.533 norandommap=0 00:35:23.533 numjobs=1 00:35:23.533 00:35:23.533 verify_dump=1 00:35:23.533 verify_backlog=512 00:35:23.533 verify_state_save=0 00:35:23.533 do_verify=1 00:35:23.533 verify=crc32c-intel 00:35:23.533 [job0] 00:35:23.533 filename=/dev/nvme0n1 00:35:23.533 [job1] 00:35:23.533 filename=/dev/nvme0n2 00:35:23.533 [job2] 00:35:23.533 filename=/dev/nvme0n3 00:35:23.533 [job3] 00:35:23.533 filename=/dev/nvme0n4 00:35:23.533 Could not set queue depth (nvme0n1) 00:35:23.533 Could not set queue depth (nvme0n2) 00:35:23.533 Could not set queue depth (nvme0n3) 00:35:23.533 Could not set queue depth (nvme0n4) 00:35:23.533 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:23.533 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:23.533 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:23.533 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:23.533 fio-3.35 00:35:23.533 Starting 4 threads 00:35:24.908 00:35:24.908 job0: (groupid=0, jobs=1): err= 0: pid=769804: Tue Nov 26 18:31:12 2024 00:35:24.908 read: IOPS=4391, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1003msec) 00:35:24.908 slat (usec): min=2, max=20394, avg=112.03, stdev=755.82 00:35:24.908 clat (usec): min=545, max=72128, avg=15532.74, stdev=12418.42 00:35:24.908 lat (usec): min=3505, max=77698, avg=15644.76, stdev=12483.80 00:35:24.908 clat percentiles (usec): 00:35:24.908 | 1.00th=[ 6652], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11338], 00:35:24.908 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 
00:35:24.908 | 70.00th=[12518], 80.00th=[13173], 90.00th=[15008], 95.00th=[53216], 00:35:24.908 | 99.00th=[69731], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:35:24.908 | 99.99th=[71828] 00:35:24.908 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:35:24.908 slat (usec): min=3, max=15977, avg=100.75, stdev=610.02 00:35:24.908 clat (usec): min=7990, max=58278, avg=12696.93, stdev=6105.34 00:35:24.908 lat (usec): min=8009, max=59637, avg=12797.67, stdev=6157.06 00:35:24.908 clat percentiles (usec): 00:35:24.908 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[11076], 00:35:24.908 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:35:24.908 | 70.00th=[12125], 80.00th=[12387], 90.00th=[13698], 95.00th=[14615], 00:35:24.908 | 99.00th=[51119], 99.50th=[56886], 99.90th=[58459], 99.95th=[58459], 00:35:24.908 | 99.99th=[58459] 00:35:24.908 bw ( KiB/s): min=14976, max=21888, per=29.36%, avg=18432.00, stdev=4887.52, samples=2 00:35:24.908 iops : min= 3744, max= 5472, avg=4608.00, stdev=1221.88, samples=2 00:35:24.908 lat (usec) : 750=0.01% 00:35:24.908 lat (msec) : 4=0.36%, 10=8.22%, 20=85.41%, 50=2.70%, 100=3.31% 00:35:24.908 cpu : usr=6.69%, sys=7.39%, ctx=402, majf=0, minf=1 00:35:24.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:24.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:24.908 issued rwts: total=4405,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:24.908 job1: (groupid=0, jobs=1): err= 0: pid=769805: Tue Nov 26 18:31:12 2024 00:35:24.908 read: IOPS=3731, BW=14.6MiB/s (15.3MB/s)(15.2MiB/1044msec) 00:35:24.908 slat (usec): min=2, max=26857, avg=143.91, stdev=1200.91 00:35:24.908 clat (usec): min=5852, max=72928, avg=19823.77, stdev=14732.35 00:35:24.908 lat (usec): min=5861, max=72968, avg=19967.68, stdev=14805.52 00:35:24.908 clat percentiles (usec): 00:35:24.908 | 1.00th=[ 5997], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10552], 00:35:24.908 | 30.00th=[11207], 40.00th=[12518], 50.00th=[15270], 60.00th=[16909], 00:35:24.908 | 70.00th=[17695], 80.00th=[22938], 90.00th=[44827], 95.00th=[58459], 00:35:24.908 | 99.00th=[70779], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:35:24.908 | 99.99th=[72877] 00:35:24.908 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:35:24.908 slat (usec): min=3, max=11887, avg=88.87, stdev=649.85 00:35:24.908 clat (usec): min=2833, max=57255, avg=13457.93, stdev=5397.37 00:35:24.908 lat (usec): min=2843, max=57264, avg=13546.80, stdev=5417.45 00:35:24.908 clat percentiles (usec): 00:35:24.908 | 1.00th=[ 5276], 5.00th=[ 7242], 10.00th=[ 8029], 20.00th=[10159], 00:35:24.908 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12518], 60.00th=[12911], 00:35:24.908 | 70.00th=[15401], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 00:35:24.908 | 99.00th=[37487], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:35:24.908 | 99.99th=[57410] 00:35:24.908 bw ( KiB/s): min=12288, max=20480, per=26.10%, avg=16384.00, stdev=5792.62, samples=2 00:35:24.908 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:35:24.908 lat (msec) : 4=0.08%, 10=14.89%, 20=70.98%, 50=9.35%, 100=4.70% 00:35:24.908 cpu : usr=3.07%, sys=5.37%, ctx=253, majf=0, minf=1 00:35:24.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 
00:35:24.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:24.908 issued rwts: total=3896,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:24.908 job2: (groupid=0, jobs=1): err= 0: pid=769808: Tue Nov 26 18:31:12 2024 00:35:24.908 read: IOPS=3185, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1004msec) 00:35:24.908 slat (usec): min=3, max=16444, avg=142.38, stdev=978.17 00:35:24.908 clat (usec): min=1892, max=46968, avg=17441.96, stdev=6802.74 00:35:24.908 lat (usec): min=6746, max=46973, avg=17584.34, stdev=6870.10 00:35:24.908 clat percentiles (usec): 00:35:24.908 | 1.00th=[ 8586], 5.00th=[10552], 10.00th=[11600], 20.00th=[12780], 00:35:24.908 | 30.00th=[13042], 40.00th=[14353], 50.00th=[14877], 60.00th=[16057], 00:35:24.908 | 70.00th=[19006], 80.00th=[21627], 90.00th=[26608], 95.00th=[32113], 00:35:24.908 | 99.00th=[42730], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:35:24.908 | 99.99th=[46924] 00:35:24.908 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:35:24.908 slat (usec): min=4, max=15977, avg=141.78, stdev=762.52 00:35:24.908 clat (usec): min=1289, max=46959, avg=19961.73, stdev=9554.79 00:35:24.908 lat (usec): min=1308, max=46966, avg=20103.51, stdev=9621.02 00:35:24.908 clat percentiles (usec): 00:35:24.908 | 1.00th=[ 4490], 5.00th=[ 7046], 10.00th=[ 8094], 20.00th=[12518], 00:35:24.908 | 30.00th=[13698], 40.00th=[14222], 50.00th=[15139], 60.00th=[21890], 00:35:24.908 | 70.00th=[26346], 80.00th=[32113], 90.00th=[34341], 95.00th=[34866], 00:35:24.908 | 99.00th=[35390], 99.50th=[35390], 99.90th=[44827], 99.95th=[46924], 00:35:24.908 | 99.99th=[46924] 00:35:24.908 bw ( KiB/s): min=12272, max=16384, per=22.82%, avg=14328.00, stdev=2907.62, samples=2 00:35:24.908 iops : min= 3068, max= 4096, avg=3582.00, stdev=726.91, samples=2 00:35:24.908 lat (msec) : 2=0.15%, 4=0.35%, 10=7.18%, 20=57.15%, 50=35.17% 00:35:24.908 cpu : usr=4.39%, sys=7.98%, ctx=336, majf=0, minf=1 00:35:24.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:35:24.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:24.908 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:24.908 job3: (groupid=0, jobs=1): err= 0: pid=769809: Tue Nov 26 18:31:12 2024 00:35:24.908 read: IOPS=3699, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1003msec) 00:35:24.908 slat (usec): min=3, max=5401, avg=101.62, stdev=560.36 00:35:24.908 clat (usec): min=990, max=22481, avg=13340.08, stdev=2301.99 00:35:24.908 lat (usec): min=4523, max=22495, avg=13441.70, stdev=2330.67 00:35:24.908 clat percentiles (usec): 00:35:24.908 | 1.00th=[ 4752], 5.00th=[10552], 10.00th=[11469], 20.00th=[11994], 00:35:24.908 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:35:24.908 | 70.00th=[14222], 80.00th=[15139], 90.00th=[16057], 95.00th=[17957], 00:35:24.908 | 99.00th=[19530], 99.50th=[19792], 99.90th=[22414], 99.95th=[22414], 00:35:24.908 | 99.99th=[22414] 00:35:24.908 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:35:24.908 slat (usec): min=3, max=26319, avg=142.93, stdev=1065.02 00:35:24.908 clat (usec): min=7626, max=77045, avg=18875.59, stdev=13500.64 00:35:24.908 lat 
(usec): min=7648, max=77070, avg=19018.53, stdev=13581.24 00:35:24.908 clat percentiles (usec): 00:35:24.908 | 1.00th=[ 9503], 5.00th=[11731], 10.00th=[12125], 20.00th=[12649], 00:35:24.908 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13829], 00:35:24.908 | 70.00th=[14353], 80.00th=[17695], 90.00th=[42730], 95.00th=[53740], 00:35:24.908 | 99.00th=[74974], 99.50th=[76022], 99.90th=[77071], 99.95th=[77071], 00:35:24.908 | 99.99th=[77071] 00:35:24.908 bw ( KiB/s): min=13768, max=18992, per=26.09%, avg=16380.00, stdev=3693.93, samples=2 00:35:24.908 iops : min= 3442, max= 4748, avg=4095.00, stdev=923.48, samples=2 00:35:24.908 lat (usec) : 1000=0.01% 00:35:24.908 lat (msec) : 10=2.38%, 20=87.58%, 50=6.83%, 100=3.20% 00:35:24.908 cpu : usr=4.99%, sys=7.88%, ctx=260, majf=0, minf=1 00:35:24.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:35:24.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:24.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:24.908 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:24.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:24.908 00:35:24.908 Run status group 0 (all jobs): 00:35:24.908 READ: bw=56.9MiB/s (59.7MB/s), 12.4MiB/s-17.2MiB/s (13.0MB/s-18.0MB/s), io=59.4MiB (62.3MB), run=1003-1044msec 00:35:24.908 WRITE: bw=61.3MiB/s (64.3MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=64.0MiB (67.1MB), run=1003-1044msec 00:35:24.908 00:35:24.908 Disk stats (read/write): 00:35:24.908 nvme0n1: ios=3634/3744, merge=0/0, ticks=13761/11307, in_queue=25068, util=86.67% 00:35:24.908 nvme0n2: ios=3368/3584, merge=0/0, ticks=41212/32879, in_queue=74091, util=100.00% 00:35:24.908 nvme0n3: ios=2934/3072, merge=0/0, ticks=48571/54418, in_queue=102989, util=88.81% 00:35:24.908 nvme0n4: ios=3072/3213, merge=0/0, ticks=14481/21073, in_queue=35554, util=89.65% 00:35:24.909 18:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:24.909 [global] 00:35:24.909 thread=1 00:35:24.909 invalidate=1 00:35:24.909 rw=randwrite 00:35:24.909 time_based=1 00:35:24.909 runtime=1 00:35:24.909 ioengine=libaio 00:35:24.909 direct=1 00:35:24.909 bs=4096 00:35:24.909 iodepth=128 00:35:24.909 norandommap=0 00:35:24.909 numjobs=1 00:35:24.909 00:35:24.909 verify_dump=1 00:35:24.909 verify_backlog=512 00:35:24.909 verify_state_save=0 00:35:24.909 do_verify=1 00:35:24.909 verify=crc32c-intel 00:35:24.909 [job0] 00:35:24.909 filename=/dev/nvme0n1 00:35:24.909 [job1] 00:35:24.909 filename=/dev/nvme0n2 00:35:24.909 [job2] 00:35:24.909 filename=/dev/nvme0n3 00:35:24.909 [job3] 00:35:24.909 filename=/dev/nvme0n4 00:35:24.909 Could not set queue depth (nvme0n1) 00:35:24.909 Could not set queue depth (nvme0n2) 00:35:24.909 Could not set queue depth (nvme0n3) 00:35:24.909 Could not set queue depth (nvme0n4) 00:35:25.168 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:25.168 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:25.168 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:25.168 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:25.168 fio-3.35 
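The [global] and [job0]-[job3] sections printed above are the fio job configuration that the fio-wrapper invocation (-p nvmf -i 4096 -d 128 -t randwrite -r 1 -v) expands to for the four connected namespaces. As a rough standalone sketch only, assuming the namespaces are already attached as /dev/nvme0n1..nvme0n4 and using a hypothetical /tmp path (the wrapper manages its own temporary job file), the same workload could be launched directly with:

  cat > /tmp/nvmf-randwrite-qd128.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=128
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4
  EOF
  fio /tmp/nvmf-randwrite-qd128.fio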
00:35:25.168 Starting 4 threads 00:35:26.541 00:35:26.541 job0: (groupid=0, jobs=1): err= 0: pid=770043: Tue Nov 26 18:31:14 2024 00:35:26.541 read: IOPS=5547, BW=21.7MiB/s (22.7MB/s)(21.7MiB/1003msec) 00:35:26.541 slat (usec): min=2, max=2886, avg=85.40, stdev=388.17 00:35:26.541 clat (usec): min=789, max=65255, avg=11516.48, stdev=3119.04 00:35:26.541 lat (usec): min=2455, max=67745, avg=11601.88, stdev=3116.82 00:35:26.541 clat percentiles (usec): 00:35:26.541 | 1.00th=[ 5211], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10552], 00:35:26.541 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:35:26.541 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12911], 00:35:26.541 | 99.00th=[27919], 99.50th=[30278], 99.90th=[65274], 99.95th=[65274], 00:35:26.541 | 99.99th=[65274] 00:35:26.541 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:35:26.541 slat (usec): min=3, max=20557, avg=82.98, stdev=437.97 00:35:26.541 clat (usec): min=6108, max=54706, avg=11131.21, stdev=3401.78 00:35:26.541 lat (usec): min=6116, max=54712, avg=11214.19, stdev=3395.01 00:35:26.541 clat percentiles (usec): 00:35:26.541 | 1.00th=[ 6325], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028], 00:35:26.541 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:35:26.541 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12256], 95.00th=[12518], 00:35:26.541 | 99.00th=[13304], 99.50th=[40109], 99.90th=[54789], 99.95th=[54789], 00:35:26.541 | 99.99th=[54789] 00:35:26.541 bw ( KiB/s): min=20984, max=24072, per=36.97%, avg=22528.00, stdev=2183.55, samples=2 00:35:26.541 iops : min= 5246, max= 6018, avg=5632.00, stdev=545.89, samples=2 00:35:26.541 lat (usec) : 1000=0.01% 00:35:26.541 lat (msec) : 4=0.24%, 10=13.84%, 20=84.77%, 50=0.82%, 100=0.31% 00:35:26.541 cpu : usr=6.69%, sys=11.88%, ctx=640, majf=0, minf=1 00:35:26.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:26.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:26.541 issued rwts: total=5564,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:26.541 job1: (groupid=0, jobs=1): err= 0: pid=770044: Tue Nov 26 18:31:14 2024 00:35:26.541 read: IOPS=2221, BW=8885KiB/s (9099kB/s)(9312KiB/1048msec) 00:35:26.541 slat (usec): min=2, max=29601, avg=217.50, stdev=1396.54 00:35:26.541 clat (msec): min=7, max=121, avg=28.25, stdev=20.18 00:35:26.541 lat (msec): min=7, max=121, avg=28.47, stdev=20.34 00:35:26.541 clat percentiles (msec): 00:35:26.541 | 1.00th=[ 8], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17], 00:35:26.541 | 30.00th=[ 19], 40.00th=[ 21], 50.00th=[ 23], 60.00th=[ 25], 00:35:26.541 | 70.00th=[ 26], 80.00th=[ 30], 90.00th=[ 67], 95.00th=[ 81], 00:35:26.541 | 99.00th=[ 103], 99.50th=[ 122], 99.90th=[ 122], 99.95th=[ 122], 00:35:26.541 | 99.99th=[ 122] 00:35:26.541 write: IOPS=2442, BW=9771KiB/s (10.0MB/s)(10.0MiB/1048msec); 0 zone resets 00:35:26.541 slat (usec): min=2, max=9658, avg=180.94, stdev=975.37 00:35:26.541 clat (usec): min=8078, max=98383, avg=25873.21, stdev=11504.78 00:35:26.542 lat (usec): min=8082, max=98390, avg=26054.14, stdev=11569.95 00:35:26.542 clat percentiles (usec): 00:35:26.542 | 1.00th=[11207], 5.00th=[15795], 10.00th=[15926], 20.00th=[16450], 00:35:26.542 | 30.00th=[18482], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:35:26.542 | 70.00th=[23725], 
80.00th=[36439], 90.00th=[43254], 95.00th=[44303], 00:35:26.542 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[77071], 00:35:26.542 | 99.99th=[98042] 00:35:26.542 bw ( KiB/s): min= 8192, max=12312, per=16.82%, avg=10252.00, stdev=2913.28, samples=2 00:35:26.542 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:35:26.542 lat (msec) : 10=1.17%, 20=35.39%, 50=56.20%, 100=6.28%, 250=0.96% 00:35:26.542 cpu : usr=2.67%, sys=4.49%, ctx=215, majf=0, minf=1 00:35:26.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:35:26.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:26.542 issued rwts: total=2328,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:26.542 job2: (groupid=0, jobs=1): err= 0: pid=770045: Tue Nov 26 18:31:14 2024 00:35:26.542 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:35:26.542 slat (usec): min=2, max=7957, avg=144.74, stdev=804.67 00:35:26.542 clat (usec): min=9020, max=37332, avg=18408.23, stdev=5280.78 00:35:26.542 lat (usec): min=9025, max=42097, avg=18552.97, stdev=5344.45 00:35:26.542 clat percentiles (usec): 00:35:26.542 | 1.00th=[10290], 5.00th=[11731], 10.00th=[12387], 20.00th=[12911], 00:35:26.542 | 30.00th=[13698], 40.00th=[16319], 50.00th=[17695], 60.00th=[19268], 00:35:26.542 | 70.00th=[21365], 80.00th=[23462], 90.00th=[25297], 95.00th=[27395], 00:35:26.542 | 99.00th=[32637], 99.50th=[32900], 99.90th=[36439], 99.95th=[37487], 00:35:26.542 | 99.99th=[37487] 00:35:26.542 write: IOPS=3153, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1004msec); 0 zone resets 00:35:26.542 slat (usec): min=3, max=16004, avg=168.57, stdev=913.61 00:35:26.542 clat (usec): min=3635, max=42998, avg=22132.49, stdev=8676.31 00:35:26.542 lat (usec): min=3641, max=43004, avg=22301.06, stdev=8762.43 00:35:26.542 clat percentiles (usec): 00:35:26.542 | 1.00th=[ 6718], 5.00th=[12256], 10.00th=[12911], 20.00th=[14353], 00:35:26.542 | 30.00th=[16319], 40.00th=[19268], 50.00th=[22152], 60.00th=[23200], 00:35:26.542 | 70.00th=[23987], 80.00th=[27395], 90.00th=[37487], 95.00th=[41157], 00:35:26.542 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:35:26.542 | 99.99th=[43254] 00:35:26.542 bw ( KiB/s): min=10704, max=13872, per=20.16%, avg=12288.00, stdev=2240.11, samples=2 00:35:26.542 iops : min= 2676, max= 3468, avg=3072.00, stdev=560.03, samples=2 00:35:26.542 lat (msec) : 4=0.08%, 10=1.54%, 20=54.18%, 50=44.20% 00:35:26.542 cpu : usr=3.09%, sys=3.39%, ctx=299, majf=0, minf=1 00:35:26.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:35:26.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:26.542 issued rwts: total=3072,3166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:26.542 job3: (groupid=0, jobs=1): err= 0: pid=770046: Tue Nov 26 18:31:14 2024 00:35:26.542 read: IOPS=4534, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1007msec) 00:35:26.542 slat (usec): min=3, max=11361, avg=92.94, stdev=688.37 00:35:26.542 clat (usec): min=6443, max=56115, avg=13798.49, stdev=3581.94 00:35:26.542 lat (usec): min=6451, max=56121, avg=13891.43, stdev=3614.68 00:35:26.542 clat percentiles (usec): 00:35:26.542 | 1.00th=[ 8029], 5.00th=[ 9503], 
10.00th=[ 9896], 20.00th=[11076], 00:35:26.542 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13304], 60.00th=[13960], 00:35:26.542 | 70.00th=[14877], 80.00th=[15795], 90.00th=[17695], 95.00th=[20579], 00:35:26.542 | 99.00th=[24249], 99.50th=[26608], 99.90th=[55837], 99.95th=[55837], 00:35:26.542 | 99.99th=[56361] 00:35:26.542 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:35:26.542 slat (usec): min=4, max=13859, avg=100.69, stdev=693.01 00:35:26.542 clat (usec): min=2841, max=61028, avg=14046.98, stdev=4824.74 00:35:26.542 lat (usec): min=2850, max=61057, avg=14147.66, stdev=4863.34 00:35:26.542 clat percentiles (usec): 00:35:26.542 | 1.00th=[ 5014], 5.00th=[ 9765], 10.00th=[11600], 20.00th=[12387], 00:35:26.542 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13304], 00:35:26.542 | 70.00th=[13566], 80.00th=[14222], 90.00th=[17695], 95.00th=[21890], 00:35:26.542 | 99.00th=[38011], 99.50th=[45876], 99.90th=[55837], 99.95th=[55837], 00:35:26.542 | 99.99th=[61080] 00:35:26.542 bw ( KiB/s): min=16904, max=19960, per=30.25%, avg=18432.00, stdev=2160.92, samples=2 00:35:26.542 iops : min= 4226, max= 4990, avg=4608.00, stdev=540.23, samples=2 00:35:26.542 lat (msec) : 4=0.25%, 10=7.91%, 20=85.28%, 50=6.25%, 100=0.31% 00:35:26.542 cpu : usr=5.86%, sys=9.54%, ctx=393, majf=0, minf=1 00:35:26.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:26.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:26.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:26.542 issued rwts: total=4566,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:26.542 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:26.542 00:35:26.542 Run status group 0 (all jobs): 00:35:26.542 READ: bw=57.9MiB/s (60.7MB/s), 8885KiB/s-21.7MiB/s (9099kB/s-22.7MB/s), io=60.7MiB (63.6MB), run=1003-1048msec 00:35:26.542 WRITE: bw=59.5MiB/s (62.4MB/s), 9771KiB/s-21.9MiB/s (10.0MB/s-23.0MB/s), io=62.4MiB (65.4MB), run=1003-1048msec 00:35:26.542 00:35:26.542 Disk stats (read/write): 00:35:26.542 nvme0n1: ios=4651/4608, merge=0/0, ticks=12887/11612, in_queue=24499, util=89.08% 00:35:26.542 nvme0n2: ios=2048/2528, merge=0/0, ticks=15377/20662, in_queue=36039, util=84.85% 00:35:26.542 nvme0n3: ios=2048/2452, merge=0/0, ticks=14161/19391, in_queue=33552, util=88.29% 00:35:26.542 nvme0n4: ios=3723/4079, merge=0/0, ticks=34996/40775, in_queue=75771, util=100.00% 00:35:26.542 18:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:26.542 18:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=770178 00:35:26.542 18:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:26.542 18:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:26.542 [global] 00:35:26.542 thread=1 00:35:26.542 invalidate=1 00:35:26.542 rw=read 00:35:26.542 time_based=1 00:35:26.542 runtime=10 00:35:26.542 ioengine=libaio 00:35:26.542 direct=1 00:35:26.542 bs=4096 00:35:26.542 iodepth=1 00:35:26.542 norandommap=1 00:35:26.542 numjobs=1 00:35:26.542 00:35:26.542 [job0] 00:35:26.542 filename=/dev/nvme0n1 00:35:26.542 [job1] 00:35:26.542 filename=/dev/nvme0n2 00:35:26.542 [job2] 00:35:26.542 filename=/dev/nvme0n3 00:35:26.542 [job3] 00:35:26.542 filename=/dev/nvme0n4 
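The read job file above (rw=read, runtime=10, iodepth=1) is the one fio.sh starts in the background (fio_pid=770178) before pulling storage out from under it: after the sleep 3 it deletes the RAID and malloc bdevs while reads are still in flight, which is why the io_u "Operation not supported" errors further down are the expected outcome of the hotplug test. A rough sketch of that sequence, using a single job for brevity and RPC as shorthand for the rpc.py path shown in this log:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
      --ioengine=libaio --direct=1 --time_based --runtime=10 &
  FIO_PID=$!
  sleep 3
  $RPC bdev_raid_delete concat0      # remove backing bdevs while reads are in flight
  $RPC bdev_raid_delete raid0
  $RPC bdev_malloc_delete Malloc0
  wait $FIO_PID                      # fio is expected to end with I/O errors (err=95)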
00:35:26.542 Could not set queue depth (nvme0n1) 00:35:26.542 Could not set queue depth (nvme0n2) 00:35:26.542 Could not set queue depth (nvme0n3) 00:35:26.542 Could not set queue depth (nvme0n4) 00:35:26.542 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:26.542 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:26.542 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:26.542 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:26.542 fio-3.35 00:35:26.542 Starting 4 threads 00:35:29.896 18:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:29.896 18:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:29.896 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10764288, buflen=4096 00:35:29.896 fio: pid=770273, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:29.896 18:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:29.896 18:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:29.896 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=8278016, buflen=4096 00:35:29.896 fio: pid=770272, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:30.154 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=49344512, buflen=4096 00:35:30.154 fio: pid=770270, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:30.154 18:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:30.154 18:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:30.412 18:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:30.412 18:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:30.670 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=425984, buflen=4096 00:35:30.670 fio: pid=770271, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:30.670 00:35:30.670 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770270: Tue Nov 26 18:31:18 2024 00:35:30.670 read: IOPS=3439, BW=13.4MiB/s (14.1MB/s)(47.1MiB/3503msec) 00:35:30.670 slat (usec): min=5, max=27613, avg=14.14, stdev=302.31 00:35:30.670 clat (usec): min=196, max=41985, avg=272.08, stdev=785.93 00:35:30.670 lat (usec): min=202, max=41998, avg=286.22, stdev=842.41 00:35:30.670 clat percentiles (usec): 
00:35:30.670 | 1.00th=[ 208], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:35:30.670 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:35:30.670 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:35:30.670 | 99.00th=[ 482], 99.50th=[ 545], 99.90th=[ 816], 99.95th=[ 1123], 00:35:30.670 | 99.99th=[42206] 00:35:30.670 bw ( KiB/s): min=10432, max=15152, per=77.65%, avg=13646.67, stdev=1932.81, samples=6 00:35:30.670 iops : min= 2608, max= 3788, avg=3411.67, stdev=483.20, samples=6 00:35:30.670 lat (usec) : 250=51.34%, 500=47.84%, 750=0.70%, 1000=0.04% 00:35:30.670 lat (msec) : 2=0.02%, 50=0.04% 00:35:30.670 cpu : usr=1.91%, sys=5.17%, ctx=12055, majf=0, minf=1 00:35:30.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.670 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.670 issued rwts: total=12048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:30.670 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770271: Tue Nov 26 18:31:18 2024 00:35:30.670 read: IOPS=27, BW=109KiB/s (111kB/s)(416KiB/3824msec) 00:35:30.670 slat (usec): min=6, max=7971, avg=168.60, stdev=1087.35 00:35:30.670 clat (usec): min=212, max=42023, avg=36298.05, stdev=13064.59 00:35:30.670 lat (usec): min=227, max=49039, avg=36468.12, stdev=13166.59 00:35:30.670 clat percentiles (usec): 00:35:30.670 | 1.00th=[ 219], 5.00th=[ 239], 10.00th=[ 449], 20.00th=[40633], 00:35:30.670 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:30.670 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:30.670 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:30.670 | 99.99th=[42206] 00:35:30.670 bw ( KiB/s): min= 96, max= 128, per=0.62%, avg=109.57, stdev=12.70, samples=7 00:35:30.670 iops : min= 24, max= 32, avg=27.29, stdev= 3.09, samples=7 00:35:30.670 lat (usec) : 250=5.71%, 500=5.71% 00:35:30.670 lat (msec) : 50=87.62% 00:35:30.670 cpu : usr=0.00%, sys=0.08%, ctx=110, majf=0, minf=2 00:35:30.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.670 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.670 issued rwts: total=105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:30.670 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770272: Tue Nov 26 18:31:18 2024 00:35:30.670 read: IOPS=630, BW=2521KiB/s (2581kB/s)(8084KiB/3207msec) 00:35:30.670 slat (usec): min=5, max=11709, avg=16.90, stdev=295.02 00:35:30.670 clat (usec): min=201, max=42056, avg=1555.61, stdev=7206.92 00:35:30.670 lat (usec): min=209, max=42075, avg=1572.51, stdev=7213.46 00:35:30.670 clat percentiles (usec): 00:35:30.670 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 221], 20.00th=[ 227], 00:35:30.670 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 255], 00:35:30.670 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 330], 95.00th=[ 553], 00:35:30.670 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:30.670 | 99.99th=[42206] 00:35:30.670 bw ( KiB/s): min= 96, max= 9680, per=9.64%, avg=1694.67, 
stdev=3912.00, samples=6 00:35:30.670 iops : min= 24, max= 2420, avg=423.67, stdev=978.00, samples=6 00:35:30.670 lat (usec) : 250=53.66%, 500=40.41%, 750=2.77% 00:35:30.670 lat (msec) : 50=3.12% 00:35:30.670 cpu : usr=0.41%, sys=0.69%, ctx=2025, majf=0, minf=1 00:35:30.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.670 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.670 issued rwts: total=2022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:30.670 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=770273: Tue Nov 26 18:31:18 2024 00:35:30.670 read: IOPS=900, BW=3600KiB/s (3686kB/s)(10.3MiB/2920msec) 00:35:30.670 slat (nsec): min=4136, max=40801, avg=6793.50, stdev=3195.56 00:35:30.670 clat (usec): min=207, max=42161, avg=1091.40, stdev=5806.12 00:35:30.670 lat (usec): min=213, max=42177, avg=1098.19, stdev=5808.07 00:35:30.670 clat percentiles (usec): 00:35:30.670 | 1.00th=[ 223], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:35:30.670 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 262], 00:35:30.670 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 318], 00:35:30.670 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:30.670 | 99.99th=[42206] 00:35:30.670 bw ( KiB/s): min= 96, max= 5840, per=12.48%, avg=2193.60, stdev=2890.66, samples=5 00:35:30.670 iops : min= 24, max= 1460, avg=548.40, stdev=722.67, samples=5 00:35:30.670 lat (usec) : 250=25.41%, 500=72.31%, 750=0.15%, 1000=0.08% 00:35:30.670 lat (msec) : 20=0.04%, 50=1.98% 00:35:30.670 cpu : usr=0.38%, sys=0.86%, ctx=2630, majf=0, minf=2 00:35:30.670 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.670 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.670 issued rwts: total=2629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.670 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:30.670 00:35:30.670 Run status group 0 (all jobs): 00:35:30.670 READ: bw=17.2MiB/s (18.0MB/s), 109KiB/s-13.4MiB/s (111kB/s-14.1MB/s), io=65.6MiB (68.8MB), run=2920-3824msec 00:35:30.670 00:35:30.670 Disk stats (read/write): 00:35:30.670 nvme0n1: ios=11632/0, merge=0/0, ticks=3872/0, in_queue=3872, util=98.14% 00:35:30.670 nvme0n2: ios=141/0, merge=0/0, ticks=4647/0, in_queue=4647, util=99.06% 00:35:30.670 nvme0n3: ios=1694/0, merge=0/0, ticks=3057/0, in_queue=3057, util=96.26% 00:35:30.670 nvme0n4: ios=2496/0, merge=0/0, ticks=2815/0, in_queue=2815, util=96.71% 00:35:30.928 18:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:30.928 18:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:31.186 18:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:31.186 18:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:35:31.456 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:31.456 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:31.716 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:31.716 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 770178 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:31.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:31.974 nvmf hotplug test: fio failed as expected 00:35:31.974 18:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:32.537 
18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:32.537 rmmod nvme_tcp 00:35:32.537 rmmod nvme_fabrics 00:35:32.537 rmmod nvme_keyring 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 768275 ']' 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 768275 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 768275 ']' 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 768275 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 768275 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 768275' 00:35:32.537 killing process with pid 768275 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 768275 00:35:32.537 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 768275 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:32.795 18:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.699 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:34.699 00:35:34.699 real 0m23.934s 00:35:34.699 user 1m7.927s 00:35:34.699 sys 0m10.295s 00:35:34.699 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.699 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:34.699 ************************************ 00:35:34.699 END TEST nvmf_fio_target 00:35:34.699 ************************************ 00:35:34.699 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:34.699 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:34.699 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.699 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:34.699 ************************************ 00:35:34.699 START TEST nvmf_bdevio 00:35:34.699 ************************************ 00:35:34.699 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:34.958 * Looking for test storage... 
00:35:34.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.958 --rc genhtml_branch_coverage=1 00:35:34.958 --rc genhtml_function_coverage=1 00:35:34.958 --rc genhtml_legend=1 00:35:34.958 --rc geninfo_all_blocks=1 00:35:34.958 --rc geninfo_unexecuted_blocks=1 00:35:34.958 00:35:34.958 ' 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.958 --rc genhtml_branch_coverage=1 00:35:34.958 --rc genhtml_function_coverage=1 00:35:34.958 --rc genhtml_legend=1 00:35:34.958 --rc geninfo_all_blocks=1 00:35:34.958 --rc geninfo_unexecuted_blocks=1 00:35:34.958 00:35:34.958 ' 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.958 --rc genhtml_branch_coverage=1 00:35:34.958 --rc genhtml_function_coverage=1 00:35:34.958 --rc genhtml_legend=1 00:35:34.958 --rc geninfo_all_blocks=1 00:35:34.958 --rc geninfo_unexecuted_blocks=1 00:35:34.958 00:35:34.958 ' 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:34.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.958 --rc genhtml_branch_coverage=1 00:35:34.958 --rc genhtml_function_coverage=1 00:35:34.958 --rc genhtml_legend=1 00:35:34.958 --rc geninfo_all_blocks=1 00:35:34.958 --rc geninfo_unexecuted_blocks=1 00:35:34.958 00:35:34.958 ' 00:35:34.958 18:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.958 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.959 18:31:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:34.959 18:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:37.489 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:37.489 18:31:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:37.489 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.489 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:37.490 Found net devices under 0000:09:00.0: cvl_0_0 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:37.490 Found net devices under 0000:09:00.1: cvl_0_1 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.490 18:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:37.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:37.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:35:37.490 00:35:37.490 --- 10.0.0.2 ping statistics --- 00:35:37.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.490 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:37.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:37.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:35:37.490 00:35:37.490 --- 10.0.0.1 ping statistics --- 00:35:37.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:37.490 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:37.490 18:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=773020 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 773020 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 773020 ']' 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:37.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.490 [2024-11-26 18:31:25.158210] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:37.490 [2024-11-26 18:31:25.159270] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:35:37.490 [2024-11-26 18:31:25.159355] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:37.490 [2024-11-26 18:31:25.232500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:37.490 [2024-11-26 18:31:25.291356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:37.490 [2024-11-26 18:31:25.291406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:37.490 [2024-11-26 18:31:25.291435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:37.490 [2024-11-26 18:31:25.291447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:37.490 [2024-11-26 18:31:25.291458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:37.490 [2024-11-26 18:31:25.292970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:37.490 [2024-11-26 18:31:25.293033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:37.490 [2024-11-26 18:31:25.293100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:37.490 [2024-11-26 18:31:25.293103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:37.490 [2024-11-26 18:31:25.384182] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:37.490 [2024-11-26 18:31:25.384412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:37.490 [2024-11-26 18:31:25.384730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:37.490 [2024-11-26 18:31:25.385410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:37.490 [2024-11-26 18:31:25.385678] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:37.490 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.491 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.491 [2024-11-26 18:31:25.445803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.491 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.491 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:37.491 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.491 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.491 Malloc0 00:35:37.491 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.491 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:37.491 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.491 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.749 18:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.749 [2024-11-26 18:31:25.510011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:37.749 { 00:35:37.749 "params": { 00:35:37.749 "name": "Nvme$subsystem", 00:35:37.749 "trtype": "$TEST_TRANSPORT", 00:35:37.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.749 "adrfam": "ipv4", 00:35:37.749 "trsvcid": "$NVMF_PORT", 00:35:37.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.749 "hdgst": ${hdgst:-false}, 00:35:37.749 "ddgst": ${ddgst:-false} 00:35:37.749 }, 00:35:37.749 "method": "bdev_nvme_attach_controller" 00:35:37.749 } 00:35:37.749 EOF 00:35:37.749 )") 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:37.749 18:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:37.749 "params": { 00:35:37.749 "name": "Nvme1", 00:35:37.749 "trtype": "tcp", 00:35:37.749 "traddr": "10.0.0.2", 00:35:37.749 "adrfam": "ipv4", 00:35:37.749 "trsvcid": "4420", 00:35:37.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:37.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:37.749 "hdgst": false, 00:35:37.749 "ddgst": false 00:35:37.749 }, 00:35:37.749 "method": "bdev_nvme_attach_controller" 00:35:37.749 }' 00:35:37.749 [2024-11-26 18:31:25.557895] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
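Condensed, the environment assembled by the records above before bdevio starts is roughly the following. This is a hedged sketch of what nvmftestinit/nvmf_tcp_init and the bdevio.sh rpc_cmd calls perform, with $SPDK_ROOT standing in for the Jenkins workspace path, interface names taken from the "Found net devices" records, and flags copied from the rpc_cmd lines above:

    # target NIC moves into its own network namespace, the initiator side keeps cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow the NVMe/TCP port in; the rule carries an SPDK_NVMF comment so teardown can filter it out
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                      # reachability check recorded above

    # nvmf_tgt runs inside the namespace in interrupt mode (pid 773020 above);
    # the target is then configured through rpc.py (rpc_cmd is a thin wrapper around it)
    ip netns exec cvl_0_0_ns_spdk $SPDK_ROOT/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    $SPDK_ROOT/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK_ROOT/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK_ROOT/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK_ROOT/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary then attaches from the initiator side using the JSON fragment printed above (bdev_nvme_attach_controller against 10.0.0.2:4420), passed to it over a file descriptor via --json /dev/fd/62.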
00:35:37.749 [2024-11-26 18:31:25.557961] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773048 ] 00:35:37.749 [2024-11-26 18:31:25.626618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:37.749 [2024-11-26 18:31:25.692340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.749 [2024-11-26 18:31:25.692394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:37.749 [2024-11-26 18:31:25.692398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:38.316 I/O targets: 00:35:38.316 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:38.316 00:35:38.316 00:35:38.316 CUnit - A unit testing framework for C - Version 2.1-3 00:35:38.316 http://cunit.sourceforge.net/ 00:35:38.316 00:35:38.316 00:35:38.316 Suite: bdevio tests on: Nvme1n1 00:35:38.316 Test: blockdev write read block ...passed 00:35:38.316 Test: blockdev write zeroes read block ...passed 00:35:38.316 Test: blockdev write zeroes read no split ...passed 00:35:38.316 Test: blockdev write zeroes read split ...passed 00:35:38.316 Test: blockdev write zeroes read split partial ...passed 00:35:38.316 Test: blockdev reset ...[2024-11-26 18:31:26.230214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:38.316 [2024-11-26 18:31:26.230327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d95cb0 (9): Bad file descriptor 00:35:38.316 [2024-11-26 18:31:26.234561] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:38.316 passed 00:35:38.316 Test: blockdev write read 8 blocks ...passed 00:35:38.316 Test: blockdev write read size > 128k ...passed 00:35:38.316 Test: blockdev write read invalid size ...passed 00:35:38.316 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:38.316 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:38.316 Test: blockdev write read max offset ...passed 00:35:38.575 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:38.575 Test: blockdev writev readv 8 blocks ...passed 00:35:38.575 Test: blockdev writev readv 30 x 1block ...passed 00:35:38.575 Test: blockdev writev readv block ...passed 00:35:38.575 Test: blockdev writev readv size > 128k ...passed 00:35:38.575 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:38.575 Test: blockdev comparev and writev ...[2024-11-26 18:31:26.406518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:38.575 [2024-11-26 18:31:26.406554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.406580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:38.575 [2024-11-26 18:31:26.406598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.406974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:38.575 [2024-11-26 18:31:26.407000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.407022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:38.575 [2024-11-26 18:31:26.407039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.407413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:38.575 [2024-11-26 18:31:26.407438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.407461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:38.575 [2024-11-26 18:31:26.407477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.407837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:38.575 [2024-11-26 18:31:26.407862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.407884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:38.575 [2024-11-26 18:31:26.407900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:38.575 passed 00:35:38.575 Test: blockdev nvme passthru rw ...passed 00:35:38.575 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:31:26.490561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:38.575 [2024-11-26 18:31:26.490591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.490758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:38.575 [2024-11-26 18:31:26.490783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.490936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:38.575 [2024-11-26 18:31:26.490961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:38.575 [2024-11-26 18:31:26.491115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:38.575 [2024-11-26 18:31:26.491140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:38.575 passed 00:35:38.575 Test: blockdev nvme admin passthru ...passed 00:35:38.575 Test: blockdev copy ...passed 00:35:38.575 00:35:38.575 Run Summary: Type Total Ran Passed Failed Inactive 00:35:38.575 suites 1 1 n/a 0 0 00:35:38.575 tests 23 23 23 0 0 00:35:38.575 asserts 152 152 152 0 n/a 00:35:38.575 00:35:38.575 Elapsed time = 1.025 seconds 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:38.833 rmmod nvme_tcp 00:35:38.833 rmmod nvme_fabrics 00:35:38.833 rmmod nvme_keyring 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
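The teardown that follows the run summary (and the rmmod output just above) amounts to roughly this sequence, a hedged sketch of nvmftestfini rather than its literal implementation; $nvmfpid corresponds to 773020 below, and remove_spdk_ns is assumed to delete the cvl_0_0_ns_spdk namespace:

    $SPDK_ROOT/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill $nvmfpid && wait $nvmfpid                        # killprocess / wait records below
    modprobe -v -r nvme-tcp                               # rmmod output above
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the rules tagged SPDK_NVMF
    ip netns delete cvl_0_0_ns_spdk                       # remove_spdk_ns (assumed behavior)
    ip -4 addr flush cvl_0_1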
00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 773020 ']' 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 773020 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 773020 ']' 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 773020 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:38.833 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773020 00:35:39.092 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:39.092 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:39.092 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773020' 00:35:39.092 killing process with pid 773020 00:35:39.092 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 773020 00:35:39.092 18:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 773020 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:39.092 18:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.628 18:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:41.628 00:35:41.628 real 0m6.448s 00:35:41.628 user 0m8.842s 
00:35:41.628 sys 0m2.563s 00:35:41.628 18:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:41.628 18:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:41.628 ************************************ 00:35:41.628 END TEST nvmf_bdevio 00:35:41.628 ************************************ 00:35:41.628 18:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:41.628 00:35:41.628 real 3m56.155s 00:35:41.628 user 8m58.304s 00:35:41.628 sys 1m24.087s 00:35:41.628 18:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:41.628 18:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:41.628 ************************************ 00:35:41.628 END TEST nvmf_target_core_interrupt_mode 00:35:41.628 ************************************ 00:35:41.628 18:31:29 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:41.628 18:31:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:41.628 18:31:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:41.628 18:31:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:41.628 ************************************ 00:35:41.628 START TEST nvmf_interrupt 00:35:41.628 ************************************ 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:41.628 * Looking for test storage... 
00:35:41.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:41.628 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:41.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.629 --rc genhtml_branch_coverage=1 00:35:41.629 --rc genhtml_function_coverage=1 00:35:41.629 --rc genhtml_legend=1 00:35:41.629 --rc geninfo_all_blocks=1 00:35:41.629 --rc geninfo_unexecuted_blocks=1 00:35:41.629 00:35:41.629 ' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:41.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.629 --rc genhtml_branch_coverage=1 00:35:41.629 --rc genhtml_function_coverage=1 00:35:41.629 --rc genhtml_legend=1 00:35:41.629 --rc geninfo_all_blocks=1 00:35:41.629 --rc geninfo_unexecuted_blocks=1 00:35:41.629 00:35:41.629 ' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:41.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.629 --rc genhtml_branch_coverage=1 00:35:41.629 --rc genhtml_function_coverage=1 00:35:41.629 --rc genhtml_legend=1 00:35:41.629 --rc geninfo_all_blocks=1 00:35:41.629 --rc geninfo_unexecuted_blocks=1 00:35:41.629 00:35:41.629 ' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:41.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:41.629 --rc genhtml_branch_coverage=1 00:35:41.629 --rc genhtml_function_coverage=1 00:35:41.629 --rc genhtml_legend=1 00:35:41.629 --rc geninfo_all_blocks=1 00:35:41.629 --rc geninfo_unexecuted_blocks=1 00:35:41.629 00:35:41.629 ' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:41.629 18:31:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:43.530 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:43.530 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:43.530 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:43.530 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:43.530 18:31:31 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:43.530 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:43.530 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:43.530 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:43.530 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:43.531 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.531 18:31:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:43.531 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:43.531 Found net devices under 0000:09:00.0: cvl_0_0 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:43.531 Found net devices under 0000:09:00.1: cvl_0_1 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:43.531 18:31:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:43.531 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:43.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:43.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:35:43.790 00:35:43.790 --- 10.0.0.2 ping statistics --- 00:35:43.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.790 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:43.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:43.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:35:43.790 00:35:43.790 --- 10.0.0.1 ping statistics --- 00:35:43.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:43.790 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=775138 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 775138 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 775138 ']' 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:43.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.790 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:43.790 [2024-11-26 18:31:31.631644] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:43.790 [2024-11-26 18:31:31.632882] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:35:43.790 [2024-11-26 18:31:31.632949] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:43.790 [2024-11-26 18:31:31.707199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:43.790 [2024-11-26 18:31:31.763961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:35:43.790 [2024-11-26 18:31:31.764013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:43.790 [2024-11-26 18:31:31.764040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:43.790 [2024-11-26 18:31:31.764050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:43.790 [2024-11-26 18:31:31.764060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:43.790 [2024-11-26 18:31:31.765400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.790 [2024-11-26 18:31:31.765406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:44.050 [2024-11-26 18:31:31.851938] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:44.050 [2024-11-26 18:31:31.851994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:44.050 [2024-11-26 18:31:31.852204] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:44.050 5000+0 records in 00:35:44.050 5000+0 records out 00:35:44.050 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0138896 s, 737 MB/s 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.050 AIO0 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.050 [2024-11-26 18:31:31.950020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.050 18:31:31 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:44.050 [2024-11-26 18:31:31.978334] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 775138 0 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 775138 0 idle 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=775138 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 775138 -w 256 00:35:44.050 18:31:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 775138 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.27 reactor_0' 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 775138 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.27 reactor_0 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 775138 1 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 775138 1 idle 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=775138 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 775138 -w 256 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 775205 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 775205 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:44.308 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:44.566 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:44.566 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:44.566 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=775302 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 775138 0 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 775138 0 busy 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=775138 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 775138 -w 256 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 775138 root 20 0 128.2g 48000 34944 S 6.7 0.1 0:00.28 reactor_0' 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 775138 root 20 0 128.2g 48000 34944 S 6.7 0.1 0:00.28 reactor_0 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:44.567 18:31:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:45.500 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:45.500 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.500 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 775138 -w 256 00:35:45.500 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 775138 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.55 reactor_0' 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 775138 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.55 reactor_0 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < 
busy_threshold )) 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 775138 1 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 775138 1 busy 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=775138 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 775138 -w 256 00:35:45.760 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 775205 root 20 0 128.2g 48384 34944 R 93.3 0.1 0:01.30 reactor_1' 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 775205 root 20 0 128.2g 48384 34944 R 93.3 0.1 0:01.30 reactor_1 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.018 18:31:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 775302 00:35:55.990 Initializing NVMe Controllers 00:35:55.990 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:55.990 Controller IO queue size 256, less than required. 00:35:55.990 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:55.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:55.990 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:55.990 Initialization complete. Launching workers. 
00:35:55.990 ======================================================== 00:35:55.990 Latency(us) 00:35:55.990 Device Information : IOPS MiB/s Average min max 00:35:55.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13736.37 53.66 18650.32 3984.57 23200.11 00:35:55.990 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13343.97 52.12 19198.22 4556.14 24007.31 00:35:55.990 ======================================================== 00:35:55.990 Total : 27080.33 105.78 18920.30 3984.57 24007.31 00:35:55.990 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 775138 0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 775138 0 idle 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=775138 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 775138 -w 256 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 775138 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0' 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 775138 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 775138 1 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 775138 1 idle 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=775138 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 775138 -w 256 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 775205 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1' 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 775205 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:55.990 18:31:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:55.990 18:31:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:55.990 18:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:55.990 18:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:55.990 18:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:55.990 18:31:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:57.428 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # 
for i in {0..1} 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 775138 0 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 775138 0 idle 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=775138 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 775138 -w 256 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 775138 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.32 reactor_0' 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 775138 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.32 reactor_0 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 775138 1 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 775138 1 idle 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=775138 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:57.429 18:31:45 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 775138 -w 256 00:35:57.429 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:57.687 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 775205 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1' 00:35:57.687 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 775205 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1 00:35:57.687 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:57.687 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:57.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:57.688 rmmod nvme_tcp 00:35:57.688 rmmod nvme_fabrics 00:35:57.688 rmmod nvme_keyring 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 775138 ']' 00:35:57.688 
18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 775138 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 775138 ']' 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 775138 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:57.688 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775138 00:35:57.946 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:57.946 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:57.946 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775138' 00:35:57.946 killing process with pid 775138 00:35:57.946 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 775138 00:35:57.946 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 775138 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:58.204 18:31:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:00.106 18:31:48 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:00.106 00:36:00.106 real 0m18.818s 00:36:00.106 user 0m36.772s 00:36:00.106 sys 0m6.664s 00:36:00.106 18:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.106 18:31:48 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:00.106 ************************************ 00:36:00.106 END TEST nvmf_interrupt 00:36:00.106 ************************************ 00:36:00.106 00:36:00.106 real 25m0.226s 00:36:00.106 user 58m28.124s 00:36:00.106 sys 6m40.780s 00:36:00.106 18:31:48 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:00.106 18:31:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.106 ************************************ 00:36:00.106 END TEST nvmf_tcp 00:36:00.106 ************************************ 00:36:00.106 18:31:48 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:00.106 18:31:48 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:00.106 18:31:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
00:36:00.106 18:31:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:00.106 18:31:48 -- common/autotest_common.sh@10 -- # set +x 00:36:00.106 ************************************ 00:36:00.106 START TEST spdkcli_nvmf_tcp 00:36:00.106 ************************************ 00:36:00.106 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:00.365 * Looking for test storage... 00:36:00.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.365 --rc genhtml_branch_coverage=1 00:36:00.365 --rc genhtml_function_coverage=1 00:36:00.365 --rc genhtml_legend=1 00:36:00.365 --rc geninfo_all_blocks=1 00:36:00.365 --rc geninfo_unexecuted_blocks=1 00:36:00.365 00:36:00.365 ' 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.365 --rc genhtml_branch_coverage=1 00:36:00.365 --rc genhtml_function_coverage=1 00:36:00.365 --rc genhtml_legend=1 00:36:00.365 --rc geninfo_all_blocks=1 00:36:00.365 --rc geninfo_unexecuted_blocks=1 00:36:00.365 00:36:00.365 ' 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:00.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.365 --rc genhtml_branch_coverage=1 00:36:00.365 --rc genhtml_function_coverage=1 00:36:00.365 --rc genhtml_legend=1 00:36:00.365 --rc geninfo_all_blocks=1 00:36:00.365 --rc geninfo_unexecuted_blocks=1 00:36:00.365 00:36:00.365 ' 00:36:00.365 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:00.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:00.366 --rc genhtml_branch_coverage=1 00:36:00.366 --rc genhtml_function_coverage=1 00:36:00.366 --rc genhtml_legend=1 00:36:00.366 --rc geninfo_all_blocks=1 00:36:00.366 --rc geninfo_unexecuted_blocks=1 00:36:00.366 00:36:00.366 ' 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:00.366 
18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:00.366 18:31:48 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:00.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=777314 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 777314 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 777314 ']' 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:00.366 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.366 [2024-11-26 18:31:48.311743] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:36:00.366 [2024-11-26 18:31:48.311848] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777314 ] 00:36:00.625 [2024-11-26 18:31:48.379888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:00.625 [2024-11-26 18:31:48.437168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.625 [2024-11-26 18:31:48.437174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:00.625 18:31:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:00.625 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:00.625 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:00.625 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:00.625 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:00.625 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:00.625 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:00.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:00.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:00.625 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:00.625 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:00.625 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:00.625 ' 00:36:03.906 [2024-11-26 18:31:51.320498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.838 [2024-11-26 18:31:52.649114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:07.364 [2024-11-26 18:31:55.112322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:09.261 [2024-11-26 18:31:57.230954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:11.159 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:11.159 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:11.159 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:11.159 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:11.159 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:11.159 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:11.159 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:11.159 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:11.159 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:11.159 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:11.159 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:11.159 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:11.159 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:11.159 18:31:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:11.159 18:31:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.159 18:31:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:11.159 18:31:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:11.159 18:31:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:11.159 18:31:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:11.159 18:31:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:11.159 18:31:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:11.725 18:31:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:11.725 18:31:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:11.725 18:31:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:11.725 18:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.725 18:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:11.725 
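[Editor's note] The "Executing command" entries above are spdkcli_job.py replaying quoted command / expected-output tuples against the nvmf_tgt that was started earlier and listens on the default UNIX socket /var/tmp/spdk.sock. The same target layout can be reproduced by hand. A minimal sketch, assuming scripts/spdkcli.py accepts a one-shot command line the way the 'll /nvmf' call above does, and that the target uses the default RPC socket; names, sizes and ports are taken from the log above:
  # create a backing bdev: 32 MiB, 512-byte blocks
  ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc3
  # create the TCP transport with the same limits the test uses
  ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  # subsystem allowing up to 4 namespaces, open to any host
  ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  # attach the bdev as namespace 1 and listen on 127.0.0.1:4260
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
  # the tree view that check_match diffs against spdkcli_nvmf.test.match
  ./scripts/spdkcli.py ll /nvmf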
18:31:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:11.725 18:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:11.725 18:31:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:11.725 18:31:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:11.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:11.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:11.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:11.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:11.725 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:11.725 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:11.725 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:11.725 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:11.725 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:11.725 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:11.725 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:11.725 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:11.725 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:11.725 ' 00:36:16.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:16.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:16.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:16.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:16.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:16.987 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:16.987 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:16.987 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:16.987 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:16.987 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:16.987 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:16.987 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:16.987 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:16.987 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:16.987 18:32:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:16.987 18:32:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:16.987 18:32:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:16.987 
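[Editor's note] The clear-config pass above walks the configuration in roughly the reverse order it was created: namespace and host entries first, then listener addresses, then the subsystems themselves, and only then the malloc bdevs. A minimal sketch of that teardown ordering, under the same one-shot spdkcli.py assumption as above and using only commands that appear in the log:
  # detach namespaces and revoke host access first
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2
  # stop accepting connections, then drop the subsystems
  ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all
  ./scripts/spdkcli.py /nvmf/subsystem delete_all
  # the backing bdevs are removed last
  ./scripts/spdkcli.py /bdevs/malloc delete Malloc3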
18:32:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 777314 00:36:16.987 18:32:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 777314 ']' 00:36:16.987 18:32:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 777314 00:36:16.987 18:32:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:16.987 18:32:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:16.987 18:32:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 777314 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 777314' 00:36:17.248 killing process with pid 777314 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 777314 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 777314 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 777314 ']' 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 777314 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 777314 ']' 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 777314 00:36:17.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (777314) - No such process 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 777314 is not found' 00:36:17.248 Process with pid 777314 is not found 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:17.248 00:36:17.248 real 0m17.149s 00:36:17.248 user 0m37.078s 00:36:17.248 sys 0m0.813s 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:17.248 18:32:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:17.248 ************************************ 00:36:17.248 END TEST spdkcli_nvmf_tcp 00:36:17.248 ************************************ 00:36:17.508 18:32:05 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:17.508 18:32:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:17.508 18:32:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:17.508 18:32:05 -- common/autotest_common.sh@10 -- # set +x 00:36:17.508 ************************************ 00:36:17.508 START TEST nvmf_identify_passthru 00:36:17.508 ************************************ 00:36:17.508 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:17.508 * Looking for test storage... 
00:36:17.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:17.508 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:17.508 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:36:17.508 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:17.508 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:17.508 18:32:05 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:17.509 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:17.509 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:17.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.509 --rc genhtml_branch_coverage=1 00:36:17.509 --rc genhtml_function_coverage=1 00:36:17.509 --rc genhtml_legend=1 00:36:17.509 --rc geninfo_all_blocks=1 00:36:17.509 --rc geninfo_unexecuted_blocks=1 00:36:17.509 00:36:17.509 ' 00:36:17.509 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:17.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.509 --rc genhtml_branch_coverage=1 00:36:17.509 --rc genhtml_function_coverage=1 00:36:17.509 --rc genhtml_legend=1 00:36:17.509 --rc geninfo_all_blocks=1 00:36:17.509 --rc geninfo_unexecuted_blocks=1 00:36:17.509 00:36:17.509 ' 00:36:17.509 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:17.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.509 --rc genhtml_branch_coverage=1 00:36:17.509 --rc genhtml_function_coverage=1 00:36:17.509 --rc genhtml_legend=1 00:36:17.509 --rc geninfo_all_blocks=1 00:36:17.509 --rc geninfo_unexecuted_blocks=1 00:36:17.509 00:36:17.509 ' 00:36:17.509 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:17.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:17.509 --rc genhtml_branch_coverage=1 00:36:17.509 --rc genhtml_function_coverage=1 00:36:17.509 --rc genhtml_legend=1 00:36:17.509 --rc geninfo_all_blocks=1 00:36:17.509 --rc geninfo_unexecuted_blocks=1 00:36:17.509 00:36:17.509 ' 00:36:17.509 18:32:05 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:17.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:17.509 18:32:05 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:17.509 18:32:05 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:17.509 18:32:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:17.509 18:32:05 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.509 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:17.509 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:17.509 18:32:05 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:17.509 18:32:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:20.040 18:32:07 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:20.040 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:20.040 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:20.040 Found net devices under 0000:09:00.0: cvl_0_0 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:20.040 Found net devices under 0000:09:00.1: cvl_0_1 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:20.040 18:32:07 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:20.040 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:20.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:20.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:36:20.040 00:36:20.040 --- 10.0.0.2 ping statistics --- 00:36:20.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:20.041 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:20.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:20.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:36:20.041 00:36:20.041 --- 10.0.0.1 ping statistics --- 00:36:20.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:20.041 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:20.041 18:32:07 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:20.041 18:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:20.041 18:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:36:20.041 18:32:07 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:0b:00.0 00:36:20.041 18:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:36:20.041 18:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:36:20.041 18:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:36:20.041 18:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:20.041 18:32:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:24.226 18:32:12 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:36:24.226 18:32:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:36:24.226 18:32:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:24.226 18:32:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:28.459 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:28.459 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:28.459 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:28.459 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=782035 00:36:28.459 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:28.459 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:28.459 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 782035 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 782035 ']' 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.459 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:28.459 [2024-11-26 18:32:16.296481] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:36:28.459 [2024-11-26 18:32:16.296561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.459 [2024-11-26 18:32:16.371031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:28.459 [2024-11-26 18:32:16.429618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.459 [2024-11-26 18:32:16.429669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:28.459 [2024-11-26 18:32:16.429682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.459 [2024-11-26 18:32:16.429693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.459 [2024-11-26 18:32:16.429702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:28.459 [2024-11-26 18:32:16.431151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:28.459 [2024-11-26 18:32:16.431206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:28.459 [2024-11-26 18:32:16.431273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:28.459 [2024-11-26 18:32:16.431277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:28.719 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:28.719 INFO: Log level set to 20 00:36:28.719 INFO: Requests: 00:36:28.719 { 00:36:28.719 "jsonrpc": "2.0", 00:36:28.719 "method": "nvmf_set_config", 00:36:28.719 "id": 1, 00:36:28.719 "params": { 00:36:28.719 "admin_cmd_passthru": { 00:36:28.719 "identify_ctrlr": true 00:36:28.719 } 00:36:28.719 } 00:36:28.719 } 00:36:28.719 00:36:28.719 INFO: response: 00:36:28.719 { 00:36:28.719 "jsonrpc": "2.0", 00:36:28.719 "id": 1, 00:36:28.719 "result": true 00:36:28.719 } 00:36:28.719 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.719 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:28.719 INFO: Setting log level to 20 00:36:28.719 INFO: Setting log level to 20 00:36:28.719 INFO: Log level set to 20 00:36:28.719 INFO: Log level set to 20 00:36:28.719 INFO: Requests: 00:36:28.719 { 00:36:28.719 "jsonrpc": "2.0", 00:36:28.719 "method": "framework_start_init", 00:36:28.719 "id": 1 00:36:28.719 } 00:36:28.719 00:36:28.719 INFO: Requests: 00:36:28.719 { 00:36:28.719 "jsonrpc": "2.0", 00:36:28.719 "method": "framework_start_init", 00:36:28.719 "id": 1 00:36:28.719 } 00:36:28.719 00:36:28.719 [2024-11-26 18:32:16.639518] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:28.719 INFO: response: 00:36:28.719 { 00:36:28.719 "jsonrpc": "2.0", 00:36:28.719 "id": 1, 00:36:28.719 "result": true 00:36:28.719 } 00:36:28.719 00:36:28.719 INFO: response: 00:36:28.719 { 00:36:28.719 "jsonrpc": "2.0", 00:36:28.719 "id": 1, 00:36:28.719 "result": true 00:36:28.719 } 00:36:28.719 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.719 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.719 18:32:16 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:28.719 INFO: Setting log level to 40 00:36:28.719 INFO: Setting log level to 40 00:36:28.719 INFO: Setting log level to 40 00:36:28.719 [2024-11-26 18:32:16.649545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.719 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:28.719 18:32:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.719 18:32:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.999 Nvme0n1 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.999 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.999 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.999 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.999 [2024-11-26 18:32:19.545680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.999 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.999 [ 00:36:31.999 { 00:36:31.999 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:31.999 "subtype": "Discovery", 00:36:31.999 "listen_addresses": [], 00:36:31.999 "allow_any_host": true, 00:36:31.999 "hosts": [] 00:36:31.999 }, 00:36:31.999 { 00:36:31.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:31.999 "subtype": "NVMe", 00:36:31.999 "listen_addresses": [ 00:36:31.999 { 00:36:31.999 "trtype": "TCP", 00:36:31.999 "adrfam": "IPv4", 00:36:31.999 "traddr": "10.0.0.2", 00:36:31.999 "trsvcid": "4420" 00:36:31.999 } 00:36:31.999 ], 00:36:31.999 "allow_any_host": true, 00:36:31.999 "hosts": [], 00:36:31.999 "serial_number": 
"SPDK00000000000001", 00:36:31.999 "model_number": "SPDK bdev Controller", 00:36:31.999 "max_namespaces": 1, 00:36:31.999 "min_cntlid": 1, 00:36:31.999 "max_cntlid": 65519, 00:36:31.999 "namespaces": [ 00:36:31.999 { 00:36:31.999 "nsid": 1, 00:36:31.999 "bdev_name": "Nvme0n1", 00:36:31.999 "name": "Nvme0n1", 00:36:31.999 "nguid": "F25F5A53BADF47D48C3C2346D53A1FD1", 00:36:31.999 "uuid": "f25f5a53-badf-47d4-8c3c-2346d53a1fd1" 00:36:31.999 } 00:36:31.999 ] 00:36:31.999 } 00:36:31.999 ] 00:36:31.999 18:32:19 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.999 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:31.999 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:32.000 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:32.000 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:36:32.000 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:32.000 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:32.000 18:32:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:32.257 18:32:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:32.257 18:32:20 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:36:32.257 18:32:20 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:32.257 18:32:20 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.257 18:32:20 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:32.257 18:32:20 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:32.257 rmmod nvme_tcp 00:36:32.257 rmmod nvme_fabrics 00:36:32.257 rmmod nvme_keyring 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 782035 ']' 00:36:32.257 18:32:20 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 782035 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 782035 ']' 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 782035 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 782035 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:32.257 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 782035' 00:36:32.257 killing process with pid 782035 00:36:32.258 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 782035 00:36:32.258 18:32:20 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 782035 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:34.159 18:32:21 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.159 18:32:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:34.159 18:32:21 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.064 18:32:23 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:36.064 00:36:36.064 real 0m18.545s 00:36:36.064 user 0m27.207s 00:36:36.064 sys 0m3.274s 00:36:36.064 18:32:23 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:36.064 18:32:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:36.064 ************************************ 00:36:36.064 END TEST nvmf_identify_passthru 00:36:36.064 ************************************ 00:36:36.064 18:32:23 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:36.064 18:32:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:36.064 18:32:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:36.064 18:32:23 -- common/autotest_common.sh@10 -- # set +x 00:36:36.064 ************************************ 00:36:36.064 START TEST nvmf_dif 00:36:36.064 ************************************ 00:36:36.064 18:32:23 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:36.064 * Looking for test storage... 
00:36:36.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:36.064 18:32:23 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:36.064 18:32:23 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:36.064 18:32:23 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:36.064 18:32:24 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:36.064 18:32:24 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:36.064 18:32:24 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:36.064 18:32:24 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.064 --rc genhtml_branch_coverage=1 00:36:36.064 --rc genhtml_function_coverage=1 00:36:36.064 --rc genhtml_legend=1 00:36:36.064 --rc geninfo_all_blocks=1 00:36:36.064 --rc geninfo_unexecuted_blocks=1 00:36:36.064 00:36:36.064 ' 00:36:36.064 18:32:24 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.064 --rc genhtml_branch_coverage=1 00:36:36.064 --rc genhtml_function_coverage=1 00:36:36.064 --rc genhtml_legend=1 00:36:36.064 --rc geninfo_all_blocks=1 00:36:36.064 --rc geninfo_unexecuted_blocks=1 00:36:36.064 00:36:36.064 ' 00:36:36.064 18:32:24 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:36:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.064 --rc genhtml_branch_coverage=1 00:36:36.064 --rc genhtml_function_coverage=1 00:36:36.064 --rc genhtml_legend=1 00:36:36.064 --rc geninfo_all_blocks=1 00:36:36.064 --rc geninfo_unexecuted_blocks=1 00:36:36.064 00:36:36.064 ' 00:36:36.064 18:32:24 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:36.064 --rc genhtml_branch_coverage=1 00:36:36.064 --rc genhtml_function_coverage=1 00:36:36.064 --rc genhtml_legend=1 00:36:36.064 --rc geninfo_all_blocks=1 00:36:36.064 --rc geninfo_unexecuted_blocks=1 00:36:36.064 00:36:36.064 ' 00:36:36.064 18:32:24 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:36.064 18:32:24 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:36.065 18:32:24 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:36.065 18:32:24 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:36.065 18:32:24 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:36.065 18:32:24 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:36.065 18:32:24 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.065 18:32:24 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.065 18:32:24 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.065 18:32:24 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:36.065 18:32:24 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:36.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:36.065 18:32:24 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:36.065 18:32:24 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:36.065 18:32:24 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:36.065 18:32:24 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:36.065 18:32:24 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:36.065 18:32:24 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.065 18:32:24 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:36.065 18:32:24 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.323 18:32:24 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:36.323 18:32:24 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:36.323 18:32:24 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:36.323 18:32:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:38.225 18:32:26 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:38.485 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.485 
18:32:26 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:38.485 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:38.485 Found net devices under 0000:09:00.0: cvl_0_0 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:38.485 Found net devices under 0000:09:00.1: cvl_0_1 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:38.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:38.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:36:38.485 00:36:38.485 --- 10.0.0.2 ping statistics --- 00:36:38.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.485 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:38.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:38.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:36:38.485 00:36:38.485 --- 10.0.0.1 ping statistics --- 00:36:38.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.485 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:38.485 18:32:26 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:39.860 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:39.860 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:39.860 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:39.860 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:39.860 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:39.860 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:39.860 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:39.860 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:39.860 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:36:39.860 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:39.860 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:36:39.860 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:36:39.860 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:36:39.860 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:36:39.860 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:36:39.860 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:36:39.860 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:39.860 18:32:27 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:39.860 18:32:27 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:39.860 18:32:27 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:39.860 18:32:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=785344 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:39.860 18:32:27 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 785344 00:36:39.860 18:32:27 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 785344 ']' 00:36:39.860 18:32:27 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.860 18:32:27 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.860 18:32:27 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:39.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.860 18:32:27 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.860 18:32:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:39.860 [2024-11-26 18:32:27.779415] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:36:39.860 [2024-11-26 18:32:27.779506] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:39.860 [2024-11-26 18:32:27.851529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.119 [2024-11-26 18:32:27.906814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:40.119 [2024-11-26 18:32:27.906870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:40.119 [2024-11-26 18:32:27.906893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:40.119 [2024-11-26 18:32:27.906904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:40.119 [2024-11-26 18:32:27.906914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:40.119 [2024-11-26 18:32:27.907473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:40.119 18:32:28 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.119 18:32:28 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:40.119 18:32:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:40.119 18:32:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.119 [2024-11-26 18:32:28.051420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.119 18:32:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:40.119 18:32:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:40.119 ************************************ 00:36:40.119 START TEST fio_dif_1_default 00:36:40.119 ************************************ 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.119 bdev_null0 00:36:40.119 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:40.120 [2024-11-26 18:32:28.107759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:40.120 { 00:36:40.120 "params": { 00:36:40.120 "name": "Nvme$subsystem", 00:36:40.120 "trtype": "$TEST_TRANSPORT", 00:36:40.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:40.120 "adrfam": "ipv4", 00:36:40.120 "trsvcid": "$NVMF_PORT", 00:36:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:40.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:40.120 "hdgst": ${hdgst:-false}, 00:36:40.120 "ddgst": ${ddgst:-false} 00:36:40.120 }, 00:36:40.120 "method": "bdev_nvme_attach_controller" 00:36:40.120 } 00:36:40.120 EOF 00:36:40.120 )") 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:40.120 18:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:40.120 "params": { 00:36:40.120 "name": "Nvme0", 00:36:40.120 "trtype": "tcp", 00:36:40.120 "traddr": "10.0.0.2", 00:36:40.120 "adrfam": "ipv4", 00:36:40.120 "trsvcid": "4420", 00:36:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.120 "hdgst": false, 00:36:40.120 "ddgst": false 00:36:40.120 }, 00:36:40.120 "method": "bdev_nvme_attach_controller" 00:36:40.120 }' 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:40.378 18:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:40.378 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:40.378 fio-3.35 00:36:40.378 Starting 1 thread 00:36:52.578 00:36:52.578 filename0: (groupid=0, jobs=1): err= 0: pid=785573: Tue Nov 26 18:32:39 2024 00:36:52.578 read: IOPS=99, BW=398KiB/s (407kB/s)(3984KiB/10013msec) 00:36:52.578 slat (usec): min=6, max=127, avg= 8.79, stdev= 4.71 00:36:52.578 clat (usec): min=580, max=44900, avg=40185.10, stdev=5667.72 00:36:52.578 lat (usec): min=587, max=44944, avg=40193.89, stdev=5666.67 00:36:52.578 clat percentiles (usec): 00:36:52.578 | 1.00th=[ 652], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:52.578 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:52.578 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:52.578 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:36:52.578 | 99.99th=[44827] 00:36:52.578 bw ( KiB/s): min= 384, max= 448, per=99.53%, avg=396.80, stdev=19.14, samples=20 00:36:52.578 iops : min= 96, max= 112, avg=99.20, stdev= 4.79, samples=20 00:36:52.578 lat (usec) : 750=2.01% 00:36:52.578 lat (msec) : 50=97.99% 00:36:52.578 cpu : usr=90.71%, sys=9.01%, ctx=14, majf=0, minf=301 00:36:52.578 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:52.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:52.578 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:52.578 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:52.578 00:36:52.578 Run status 
group 0 (all jobs): 00:36:52.578 READ: bw=398KiB/s (407kB/s), 398KiB/s-398KiB/s (407kB/s-407kB/s), io=3984KiB (4080kB), run=10013-10013msec 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.578 00:36:52.578 real 0m11.182s 00:36:52.578 user 0m10.415s 00:36:52.578 sys 0m1.185s 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:52.578 ************************************ 00:36:52.578 END TEST fio_dif_1_default 00:36:52.578 ************************************ 00:36:52.578 18:32:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:52.578 18:32:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:52.578 18:32:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:52.578 18:32:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:52.578 ************************************ 00:36:52.578 START TEST fio_dif_1_multi_subsystems 00:36:52.578 ************************************ 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:52.578 bdev_null0 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:52.578 [2024-11-26 18:32:39.329138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:52.578 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:52.579 bdev_null1 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:52.579 { 00:36:52.579 "params": { 00:36:52.579 "name": "Nvme$subsystem", 00:36:52.579 "trtype": "$TEST_TRANSPORT", 00:36:52.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:52.579 "adrfam": "ipv4", 00:36:52.579 "trsvcid": "$NVMF_PORT", 00:36:52.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:52.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:52.579 "hdgst": ${hdgst:-false}, 00:36:52.579 "ddgst": ${ddgst:-false} 00:36:52.579 }, 00:36:52.579 "method": "bdev_nvme_attach_controller" 00:36:52.579 } 00:36:52.579 EOF 00:36:52.579 )") 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.579 18:32:39 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:52.579 { 00:36:52.579 "params": { 00:36:52.579 "name": "Nvme$subsystem", 00:36:52.579 "trtype": "$TEST_TRANSPORT", 00:36:52.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:52.579 "adrfam": "ipv4", 00:36:52.579 "trsvcid": "$NVMF_PORT", 00:36:52.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:52.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:52.579 "hdgst": ${hdgst:-false}, 00:36:52.579 "ddgst": ${ddgst:-false} 00:36:52.579 }, 00:36:52.579 "method": "bdev_nvme_attach_controller" 00:36:52.579 } 00:36:52.579 EOF 00:36:52.579 )") 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:52.579 "params": { 00:36:52.579 "name": "Nvme0", 00:36:52.579 "trtype": "tcp", 00:36:52.579 "traddr": "10.0.0.2", 00:36:52.579 "adrfam": "ipv4", 00:36:52.579 "trsvcid": "4420", 00:36:52.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:52.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:52.579 "hdgst": false, 00:36:52.579 "ddgst": false 00:36:52.579 }, 00:36:52.579 "method": "bdev_nvme_attach_controller" 00:36:52.579 },{ 00:36:52.579 "params": { 00:36:52.579 "name": "Nvme1", 00:36:52.579 "trtype": "tcp", 00:36:52.579 "traddr": "10.0.0.2", 00:36:52.579 "adrfam": "ipv4", 00:36:52.579 "trsvcid": "4420", 00:36:52.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:52.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:52.579 "hdgst": false, 00:36:52.579 "ddgst": false 00:36:52.579 }, 00:36:52.579 "method": "bdev_nvme_attach_controller" 00:36:52.579 }' 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:36:52.579 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:52.580 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:52.580 18:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:52.580 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:52.580 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:52.580 fio-3.35 00:36:52.580 Starting 2 threads 00:37:02.547 00:37:02.547 filename0: (groupid=0, jobs=1): err= 0: pid=786980: Tue Nov 26 18:32:50 2024 00:37:02.547 read: IOPS=102, BW=410KiB/s (420kB/s)(4112KiB/10028msec) 00:37:02.547 slat (nsec): min=7096, max=91128, avg=10830.53, stdev=5924.86 00:37:02.547 clat (usec): min=594, max=47022, avg=38980.78, stdev=8864.51 00:37:02.547 lat (usec): min=601, max=47058, avg=38991.61, stdev=8864.13 00:37:02.547 clat percentiles (usec): 00:37:02.547 | 1.00th=[ 619], 5.00th=[ 693], 10.00th=[41157], 20.00th=[41157], 00:37:02.547 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:02.547 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:02.548 | 99.00th=[41681], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:37:02.548 | 99.99th=[46924] 00:37:02.548 bw ( KiB/s): min= 384, max= 480, per=39.61%, avg=409.60, stdev=32.17, samples=20 00:37:02.548 iops : min= 96, max= 120, avg=102.40, stdev= 8.04, samples=20 00:37:02.548 lat (usec) : 750=5.06% 00:37:02.548 lat (msec) : 50=94.94% 00:37:02.548 cpu : usr=97.81%, sys=1.88%, ctx=19, majf=0, minf=130 00:37:02.548 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.548 issued rwts: total=1028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.548 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:02.548 filename1: (groupid=0, jobs=1): err= 0: pid=786981: Tue Nov 26 18:32:50 2024 00:37:02.548 read: IOPS=155, BW=623KiB/s (638kB/s)(6256KiB/10041msec) 00:37:02.548 slat (nsec): min=4059, max=83851, avg=11111.42, stdev=4985.44 00:37:02.548 clat (usec): min=565, max=47717, avg=25644.31, stdev=19820.68 00:37:02.548 lat (usec): min=573, max=47729, avg=25655.42, stdev=19820.34 00:37:02.548 clat percentiles (usec): 00:37:02.548 | 1.00th=[ 594], 5.00th=[ 611], 10.00th=[ 627], 20.00th=[ 668], 00:37:02.548 | 30.00th=[ 709], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:37:02.548 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:37:02.548 | 99.00th=[42206], 99.50th=[42730], 99.90th=[47449], 99.95th=[47973], 00:37:02.548 | 99.99th=[47973] 00:37:02.548 bw ( KiB/s): min= 384, max= 1088, per=60.43%, avg=624.00, stdev=218.15, samples=20 00:37:02.548 iops : min= 96, max= 272, avg=156.00, stdev=54.54, samples=20 00:37:02.548 lat (usec) : 750=34.85%, 1000=3.77% 00:37:02.548 lat (msec) : 50=61.38% 00:37:02.548 cpu : usr=97.44%, sys=2.17%, ctx=35, majf=0, minf=216 00:37:02.548 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:02.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:37:02.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:02.548 issued rwts: total=1564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:02.548 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:02.548 00:37:02.548 Run status group 0 (all jobs): 00:37:02.548 READ: bw=1033KiB/s (1057kB/s), 410KiB/s-623KiB/s (420kB/s-638kB/s), io=10.1MiB (10.6MB), run=10028-10041msec 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:02.806 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.807 00:37:02.807 real 0m11.363s 00:37:02.807 user 0m20.819s 00:37:02.807 sys 0m0.688s 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:02.807 18:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:02.807 ************************************ 00:37:02.807 END TEST fio_dif_1_multi_subsystems 00:37:02.807 ************************************ 00:37:02.807 18:32:50 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:37:02.807 18:32:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:02.807 18:32:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.807 18:32:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:02.807 ************************************ 00:37:02.807 START TEST fio_dif_rand_params 00:37:02.807 ************************************ 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.807 bdev_null0 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:02.807 [2024-11-26 18:32:50.744012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:02.807 
18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:02.807 { 00:37:02.807 "params": { 00:37:02.807 "name": "Nvme$subsystem", 00:37:02.807 "trtype": "$TEST_TRANSPORT", 00:37:02.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:02.807 "adrfam": "ipv4", 00:37:02.807 "trsvcid": "$NVMF_PORT", 00:37:02.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:02.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:02.807 "hdgst": ${hdgst:-false}, 00:37:02.807 "ddgst": ${ddgst:-false} 00:37:02.807 }, 00:37:02.807 "method": "bdev_nvme_attach_controller" 00:37:02.807 } 00:37:02.807 EOF 00:37:02.807 )") 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
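For reference, the create_subsystem helper traced above reduces to four RPC calls against the running nvmf target. A standalone sketch of the same setup, assuming scripts/rpc.py from the SPDK tree and a TCP transport that was created earlier in the test run (the test itself goes through its rpc_cmd wrapper):
# Sketch only -- mirrors the rpc_cmd calls traced above; the rpc.py path and pre-existing TCP transport are assumptions.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3      # 64 MB null bdev, 512 B blocks, 16 B metadata, DIF type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0       # expose the bdev as a namespace of cnode0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420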
00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:02.807 "params": { 00:37:02.807 "name": "Nvme0", 00:37:02.807 "trtype": "tcp", 00:37:02.807 "traddr": "10.0.0.2", 00:37:02.807 "adrfam": "ipv4", 00:37:02.807 "trsvcid": "4420", 00:37:02.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.807 "hdgst": false, 00:37:02.807 "ddgst": false 00:37:02.807 }, 00:37:02.807 "method": "bdev_nvme_attach_controller" 00:37:02.807 }' 00:37:02.807 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:02.808 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:02.808 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:02.808 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:02.808 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:02.808 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:02.808 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:02.808 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:02.808 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:02.808 18:32:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:03.066 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:03.066 ... 
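The two descriptors handed to fio are the bdev JSON config printed just above (/dev/fd/62) and a generated job file (/dev/fd/61). A minimal hand-written equivalent, using the parameters visible in the fio banner above (randread, 128 KiB blocks, iodepth 3, 3 jobs, 5 s runtime); the file names dif.fio and bdev.json and the bdev name Nvme0n1 are assumptions, not what the test generates verbatim:
# Sketch only -- stand-in for the generated job file; thread=1 is the mode the spdk_bdev plugin runs in,
# and filename=Nvme0n1 assumes namespace 1 of the controller attached by the JSON config above.
cat > dif.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
filename=Nvme0n1
EOF

# Run the SPDK fio plugin against the job file, pointing it at the bdev JSON config.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio dif.fio --spdk_json_conf=bdev.json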
00:37:03.066 fio-3.35 00:37:03.066 Starting 3 threads 00:37:09.715 00:37:09.715 filename0: (groupid=0, jobs=1): err= 0: pid=788382: Tue Nov 26 18:32:56 2024 00:37:09.715 read: IOPS=226, BW=28.3MiB/s (29.6MB/s)(143MiB/5046msec) 00:37:09.715 slat (nsec): min=7098, max=45113, avg=18075.09, stdev=4663.48 00:37:09.715 clat (usec): min=4579, max=54923, avg=13208.04, stdev=7031.21 00:37:09.715 lat (usec): min=4591, max=54945, avg=13226.11, stdev=7031.10 00:37:09.715 clat percentiles (usec): 00:37:09.715 | 1.00th=[ 4752], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[10159], 00:37:09.715 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12387], 60.00th=[12911], 00:37:09.715 | 70.00th=[13435], 80.00th=[14091], 90.00th=[15270], 95.00th=[16188], 00:37:09.715 | 99.00th=[49546], 99.50th=[53216], 99.90th=[54264], 99.95th=[54789], 00:37:09.715 | 99.99th=[54789] 00:37:09.715 bw ( KiB/s): min=16896, max=33024, per=34.87%, avg=29132.80, stdev=4665.79, samples=10 00:37:09.715 iops : min= 132, max= 258, avg=227.60, stdev=36.45, samples=10 00:37:09.715 lat (msec) : 10=19.28%, 20=77.39%, 50=2.54%, 100=0.79% 00:37:09.715 cpu : usr=94.51%, sys=5.03%, ctx=9, majf=0, minf=57 00:37:09.715 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.715 issued rwts: total=1141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.715 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:09.715 filename0: (groupid=0, jobs=1): err= 0: pid=788383: Tue Nov 26 18:32:56 2024 00:37:09.715 read: IOPS=223, BW=28.0MiB/s (29.3MB/s)(140MiB/5007msec) 00:37:09.715 slat (usec): min=8, max=107, avg=18.67, stdev= 6.45 00:37:09.715 clat (usec): min=4182, max=53001, avg=13386.53, stdev=6717.04 00:37:09.715 lat (usec): min=4195, max=53016, avg=13405.20, stdev=6716.87 00:37:09.715 clat percentiles (usec): 00:37:09.715 | 1.00th=[ 4948], 5.00th=[ 5735], 10.00th=[ 8225], 20.00th=[ 9241], 00:37:09.715 | 30.00th=[11469], 40.00th=[12256], 50.00th=[12911], 60.00th=[13566], 00:37:09.715 | 70.00th=[14484], 80.00th=[15533], 90.00th=[16450], 95.00th=[17171], 00:37:09.715 | 99.00th=[50594], 99.50th=[52691], 99.90th=[52691], 99.95th=[53216], 00:37:09.715 | 99.99th=[53216] 00:37:09.715 bw ( KiB/s): min=24832, max=35328, per=34.23%, avg=28595.20, stdev=3086.31, samples=10 00:37:09.715 iops : min= 194, max= 276, avg=223.40, stdev=24.11, samples=10 00:37:09.715 lat (msec) : 10=23.84%, 20=73.48%, 50=1.43%, 100=1.25% 00:37:09.715 cpu : usr=86.68%, sys=7.93%, ctx=307, majf=0, minf=125 00:37:09.715 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.715 issued rwts: total=1120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.715 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:09.715 filename0: (groupid=0, jobs=1): err= 0: pid=788384: Tue Nov 26 18:32:56 2024 00:37:09.715 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(129MiB/5048msec) 00:37:09.715 slat (usec): min=7, max=106, avg=19.31, stdev= 7.35 00:37:09.715 clat (usec): min=4610, max=91868, avg=14579.98, stdev=10164.84 00:37:09.715 lat (usec): min=4618, max=91882, avg=14599.29, stdev=10164.24 00:37:09.715 clat percentiles (usec): 00:37:09.715 | 1.00th=[ 4817], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[10814], 
00:37:09.715 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:37:09.715 | 70.00th=[13042], 80.00th=[13698], 90.00th=[15008], 95.00th=[48497], 00:37:09.715 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53740], 99.95th=[91751], 00:37:09.715 | 99.99th=[91751] 00:37:09.715 bw ( KiB/s): min=15360, max=34048, per=31.59%, avg=26393.60, stdev=6057.40, samples=10 00:37:09.715 iops : min= 120, max= 266, avg=206.20, stdev=47.32, samples=10 00:37:09.715 lat (msec) : 10=10.15%, 20=82.79%, 50=2.80%, 100=4.26% 00:37:09.715 cpu : usr=91.96%, sys=5.59%, ctx=292, majf=0, minf=166 00:37:09.715 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.715 issued rwts: total=1034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.715 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:09.715 00:37:09.715 Run status group 0 (all jobs): 00:37:09.716 READ: bw=81.6MiB/s (85.6MB/s), 25.6MiB/s-28.3MiB/s (26.8MB/s-29.6MB/s), io=412MiB (432MB), run=5007-5048msec 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 bdev_null0 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 [2024-11-26 18:32:57.049837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 bdev_null1 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 bdev_null2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:09.716 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:09.717 { 00:37:09.717 "params": { 00:37:09.717 "name": "Nvme$subsystem", 00:37:09.717 "trtype": "$TEST_TRANSPORT", 00:37:09.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:09.717 "adrfam": "ipv4", 
00:37:09.717 "trsvcid": "$NVMF_PORT", 00:37:09.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:09.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:09.717 "hdgst": ${hdgst:-false}, 00:37:09.717 "ddgst": ${ddgst:-false} 00:37:09.717 }, 00:37:09.717 "method": "bdev_nvme_attach_controller" 00:37:09.717 } 00:37:09.717 EOF 00:37:09.717 )") 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:09.717 { 00:37:09.717 "params": { 00:37:09.717 "name": "Nvme$subsystem", 00:37:09.717 "trtype": "$TEST_TRANSPORT", 00:37:09.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:09.717 "adrfam": "ipv4", 00:37:09.717 "trsvcid": "$NVMF_PORT", 00:37:09.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:09.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:09.717 "hdgst": ${hdgst:-false}, 00:37:09.717 "ddgst": ${ddgst:-false} 00:37:09.717 }, 00:37:09.717 "method": "bdev_nvme_attach_controller" 00:37:09.717 } 00:37:09.717 EOF 00:37:09.717 )") 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:09.717 { 00:37:09.717 "params": { 00:37:09.717 "name": "Nvme$subsystem", 00:37:09.717 "trtype": "$TEST_TRANSPORT", 00:37:09.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:09.717 "adrfam": "ipv4", 00:37:09.717 "trsvcid": "$NVMF_PORT", 00:37:09.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:09.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:09.717 "hdgst": ${hdgst:-false}, 00:37:09.717 "ddgst": ${ddgst:-false} 00:37:09.717 }, 00:37:09.717 "method": "bdev_nvme_attach_controller" 00:37:09.717 } 00:37:09.717 EOF 00:37:09.717 )") 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:09.717 "params": { 00:37:09.717 "name": "Nvme0", 00:37:09.717 "trtype": "tcp", 00:37:09.717 "traddr": "10.0.0.2", 00:37:09.717 "adrfam": "ipv4", 00:37:09.717 "trsvcid": "4420", 00:37:09.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.717 "hdgst": false, 00:37:09.717 "ddgst": false 00:37:09.717 }, 00:37:09.717 "method": "bdev_nvme_attach_controller" 00:37:09.717 },{ 00:37:09.717 "params": { 00:37:09.717 "name": "Nvme1", 00:37:09.717 "trtype": "tcp", 00:37:09.717 "traddr": "10.0.0.2", 00:37:09.717 "adrfam": "ipv4", 00:37:09.717 "trsvcid": "4420", 00:37:09.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:09.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:09.717 "hdgst": false, 00:37:09.717 "ddgst": false 00:37:09.717 }, 00:37:09.717 "method": "bdev_nvme_attach_controller" 00:37:09.717 },{ 00:37:09.717 "params": { 00:37:09.717 "name": "Nvme2", 00:37:09.717 "trtype": "tcp", 00:37:09.717 "traddr": "10.0.0.2", 00:37:09.717 "adrfam": "ipv4", 00:37:09.717 "trsvcid": "4420", 00:37:09.717 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:09.717 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:09.717 "hdgst": false, 00:37:09.717 "ddgst": false 00:37:09.717 }, 00:37:09.717 "method": "bdev_nvme_attach_controller" 00:37:09.717 }' 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:09.717 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:09.717 18:32:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:09.718 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:09.718 18:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.718 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:09.718 ... 00:37:09.718 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:09.718 ... 00:37:09.718 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:09.718 ... 00:37:09.718 fio-3.35 00:37:09.718 Starting 24 threads 00:37:21.924 00:37:21.924 filename0: (groupid=0, jobs=1): err= 0: pid=789251: Tue Nov 26 18:33:08 2024 00:37:21.924 read: IOPS=460, BW=1843KiB/s (1887kB/s)(18.0MiB/10003msec) 00:37:21.924 slat (nsec): min=6339, max=72897, avg=31661.16, stdev=11153.69 00:37:21.924 clat (msec): min=23, max=272, avg=34.45, stdev=14.11 00:37:21.924 lat (msec): min=23, max=272, avg=34.48, stdev=14.11 00:37:21.924 clat percentiles (msec): 00:37:21.924 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.924 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.924 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.924 | 99.00th=[ 39], 99.50th=[ 44], 99.90th=[ 275], 99.95th=[ 275], 00:37:21.924 | 99.99th=[ 275] 00:37:21.924 bw ( KiB/s): min= 896, max= 1920, per=4.16%, avg=1839.16, stdev=234.51, samples=19 00:37:21.924 iops : min= 224, max= 480, avg=459.79, stdev=58.63, samples=19 00:37:21.924 lat (msec) : 50=99.65%, 500=0.35% 00:37:21.924 cpu : usr=98.58%, sys=1.04%, ctx=13, majf=0, minf=21 00:37:21.924 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.924 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.924 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.924 filename0: (groupid=0, jobs=1): err= 0: pid=789252: Tue Nov 26 18:33:08 2024 00:37:21.924 read: IOPS=464, BW=1856KiB/s (1901kB/s)(18.1MiB/10007msec) 00:37:21.924 slat (usec): min=8, max=117, avg=47.76, stdev=27.19 00:37:21.924 clat (msec): min=8, max=259, avg=34.05, stdev=13.45 00:37:21.924 lat (msec): min=8, max=259, avg=34.10, stdev=13.44 00:37:21.924 clat percentiles (msec): 00:37:21.924 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:37:21.924 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.924 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.924 | 99.00th=[ 43], 99.50th=[ 49], 99.90th=[ 259], 99.95th=[ 259], 00:37:21.924 | 99.99th=[ 259] 00:37:21.924 bw ( KiB/s): min= 1008, max= 1968, per=4.18%, avg=1848.42, stdev=211.15, samples=19 00:37:21.924 iops : min= 252, max= 492, avg=462.11, stdev=52.79, samples=19 00:37:21.924 lat (msec) : 10=0.30%, 20=0.09%, 50=99.22%, 100=0.04%, 500=0.34% 00:37:21.924 cpu : usr=97.26%, sys=1.74%, ctx=156, majf=0, minf=28 00:37:21.924 IO depths : 1=5.5%, 2=11.4%, 4=23.5%, 8=52.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:21.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:37:21.924 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.924 issued rwts: total=4644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.924 filename0: (groupid=0, jobs=1): err= 0: pid=789253: Tue Nov 26 18:33:08 2024 00:37:21.924 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10005msec) 00:37:21.924 slat (usec): min=8, max=111, avg=39.71, stdev=22.20 00:37:21.924 clat (msec): min=23, max=275, avg=34.41, stdev=14.24 00:37:21.924 lat (msec): min=23, max=275, avg=34.45, stdev=14.24 00:37:21.924 clat percentiles (msec): 00:37:21.924 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:37:21.924 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.924 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.924 | 99.00th=[ 39], 99.50th=[ 45], 99.90th=[ 275], 99.95th=[ 275], 00:37:21.924 | 99.99th=[ 275] 00:37:21.924 bw ( KiB/s): min= 896, max= 1920, per=4.16%, avg=1839.16, stdev=234.51, samples=19 00:37:21.924 iops : min= 224, max= 480, avg=459.79, stdev=58.63, samples=19 00:37:21.924 lat (msec) : 50=99.65%, 500=0.35% 00:37:21.924 cpu : usr=97.52%, sys=1.71%, ctx=54, majf=0, minf=16 00:37:21.924 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:21.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.924 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.924 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.924 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.924 filename0: (groupid=0, jobs=1): err= 0: pid=789254: Tue Nov 26 18:33:08 2024 00:37:21.924 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10007msec) 00:37:21.924 slat (usec): min=8, max=107, avg=39.65, stdev=19.09 00:37:21.924 clat (msec): min=17, max=283, avg=34.41, stdev=14.73 00:37:21.924 lat (msec): min=17, max=283, avg=34.45, stdev=14.73 00:37:21.924 clat percentiles (msec): 00:37:21.924 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.924 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.924 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.924 | 99.00th=[ 41], 99.50th=[ 42], 99.90th=[ 284], 99.95th=[ 284], 00:37:21.924 | 99.99th=[ 284] 00:37:21.924 bw ( KiB/s): min= 897, max= 2032, per=4.16%, avg=1842.45, stdev=230.56, samples=20 00:37:21.924 iops : min= 224, max= 508, avg=460.60, stdev=57.69, samples=20 00:37:21.925 lat (msec) : 20=0.35%, 50=99.31%, 500=0.35% 00:37:21.925 cpu : usr=97.52%, sys=1.67%, ctx=116, majf=0, minf=16 00:37:21.925 IO depths : 1=2.9%, 2=9.1%, 4=25.0%, 8=53.4%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:21.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.925 filename0: (groupid=0, jobs=1): err= 0: pid=789255: Tue Nov 26 18:33:08 2024 00:37:21.925 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10006msec) 00:37:21.925 slat (usec): min=7, max=117, avg=40.42, stdev=17.93 00:37:21.925 clat (msec): min=22, max=145, avg=34.26, stdev= 7.22 00:37:21.925 lat (msec): min=22, max=145, avg=34.30, stdev= 7.22 00:37:21.925 clat percentiles (msec): 00:37:21.925 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 
34], 00:37:21.925 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.925 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.925 | 99.00th=[ 68], 99.50th=[ 87], 99.90th=[ 133], 99.95th=[ 133], 00:37:21.925 | 99.99th=[ 146] 00:37:21.925 bw ( KiB/s): min= 1024, max= 1920, per=4.17%, avg=1845.89, stdev=206.02, samples=19 00:37:21.925 iops : min= 256, max= 480, avg=461.47, stdev=51.51, samples=19 00:37:21.925 lat (msec) : 50=98.96%, 100=0.69%, 250=0.35% 00:37:21.925 cpu : usr=97.91%, sys=1.47%, ctx=63, majf=0, minf=25 00:37:21.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.925 filename0: (groupid=0, jobs=1): err= 0: pid=789256: Tue Nov 26 18:33:08 2024 00:37:21.925 read: IOPS=459, BW=1840KiB/s (1884kB/s)(18.0MiB/10019msec) 00:37:21.925 slat (nsec): min=5203, max=68067, avg=27626.15, stdev=12123.61 00:37:21.925 clat (msec): min=18, max=307, avg=34.56, stdev=15.57 00:37:21.925 lat (msec): min=18, max=307, avg=34.59, stdev=15.57 00:37:21.925 clat percentiles (msec): 00:37:21.925 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.925 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.925 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.925 | 99.00th=[ 42], 99.50th=[ 46], 99.90th=[ 296], 99.95th=[ 296], 00:37:21.925 | 99.99th=[ 309] 00:37:21.925 bw ( KiB/s): min= 769, max= 1920, per=4.15%, avg=1836.85, stdev=256.71, samples=20 00:37:21.925 iops : min= 192, max= 480, avg=459.20, stdev=64.23, samples=20 00:37:21.925 lat (msec) : 20=0.35%, 50=99.31%, 500=0.35% 00:37:21.925 cpu : usr=97.56%, sys=1.63%, ctx=116, majf=0, minf=20 00:37:21.925 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:21.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.925 filename0: (groupid=0, jobs=1): err= 0: pid=789257: Tue Nov 26 18:33:08 2024 00:37:21.925 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10006msec) 00:37:21.925 slat (usec): min=8, max=116, avg=46.80, stdev=23.13 00:37:21.925 clat (msec): min=31, max=146, avg=34.22, stdev= 7.23 00:37:21.925 lat (msec): min=31, max=146, avg=34.27, stdev= 7.23 00:37:21.925 clat percentiles (msec): 00:37:21.925 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:37:21.925 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.925 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.925 | 99.00th=[ 67], 99.50th=[ 86], 99.90th=[ 133], 99.95th=[ 133], 00:37:21.925 | 99.99th=[ 146] 00:37:21.925 bw ( KiB/s): min= 1024, max= 1920, per=4.17%, avg=1845.89, stdev=206.02, samples=19 00:37:21.925 iops : min= 256, max= 480, avg=461.47, stdev=51.51, samples=19 00:37:21.925 lat (msec) : 50=98.96%, 100=0.69%, 250=0.35% 00:37:21.925 cpu : usr=97.83%, sys=1.48%, ctx=117, majf=0, minf=29 00:37:21.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.925 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.925 filename0: (groupid=0, jobs=1): err= 0: pid=789258: Tue Nov 26 18:33:08 2024 00:37:21.925 read: IOPS=461, BW=1846KiB/s (1891kB/s)(18.1MiB/10018msec) 00:37:21.925 slat (usec): min=5, max=109, avg=26.19, stdev=18.80 00:37:21.925 clat (msec): min=31, max=133, avg=34.45, stdev= 7.49 00:37:21.925 lat (msec): min=31, max=133, avg=34.48, stdev= 7.49 00:37:21.925 clat percentiles (msec): 00:37:21.925 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.925 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.925 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.925 | 99.00th=[ 70], 99.50th=[ 97], 99.90th=[ 133], 99.95th=[ 134], 00:37:21.925 | 99.99th=[ 134] 00:37:21.925 bw ( KiB/s): min= 1024, max= 1920, per=4.17%, avg=1843.20, stdev=200.89, samples=20 00:37:21.925 iops : min= 256, max= 480, avg=460.80, stdev=50.22, samples=20 00:37:21.925 lat (msec) : 50=98.96%, 100=0.65%, 250=0.39% 00:37:21.925 cpu : usr=96.37%, sys=2.25%, ctx=256, majf=0, minf=45 00:37:21.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.925 filename1: (groupid=0, jobs=1): err= 0: pid=789259: Tue Nov 26 18:33:08 2024 00:37:21.925 read: IOPS=462, BW=1848KiB/s (1892kB/s)(18.1MiB/10008msec) 00:37:21.925 slat (usec): min=9, max=119, avg=46.66, stdev=22.08 00:37:21.925 clat (msec): min=15, max=269, avg=34.21, stdev=13.49 00:37:21.925 lat (msec): min=15, max=269, avg=34.26, stdev=13.49 00:37:21.925 clat percentiles (msec): 00:37:21.925 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:37:21.925 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.925 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.925 | 99.00th=[ 39], 99.50th=[ 45], 99.90th=[ 262], 99.95th=[ 262], 00:37:21.925 | 99.99th=[ 271] 00:37:21.925 bw ( KiB/s): min= 896, max= 1920, per=4.17%, avg=1843.20, stdev=228.97, samples=20 00:37:21.925 iops : min= 224, max= 480, avg=460.80, stdev=57.24, samples=20 00:37:21.925 lat (msec) : 20=0.35%, 50=99.31%, 500=0.35% 00:37:21.925 cpu : usr=98.00%, sys=1.30%, ctx=74, majf=0, minf=23 00:37:21.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.925 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.925 filename1: (groupid=0, jobs=1): err= 0: pid=789260: Tue Nov 26 18:33:08 2024 00:37:21.925 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10006msec) 00:37:21.925 slat (nsec): min=8513, max=66995, avg=30331.64, stdev=10753.82 00:37:21.925 clat (msec): min=22, max=133, avg=34.37, stdev= 7.17 00:37:21.925 lat (msec): min=22, max=133, avg=34.40, stdev= 7.17 00:37:21.925 clat percentiles (msec): 00:37:21.925 | 
1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.925 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.925 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.925 | 99.00th=[ 81], 99.50th=[ 86], 99.90th=[ 133], 99.95th=[ 133], 00:37:21.925 | 99.99th=[ 134] 00:37:21.925 bw ( KiB/s): min= 1024, max= 1920, per=4.17%, avg=1845.89, stdev=206.02, samples=19 00:37:21.925 iops : min= 256, max= 480, avg=461.47, stdev=51.51, samples=19 00:37:21.925 lat (msec) : 50=98.96%, 100=0.69%, 250=0.35% 00:37:21.925 cpu : usr=97.47%, sys=1.65%, ctx=131, majf=0, minf=26 00:37:21.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.926 filename1: (groupid=0, jobs=1): err= 0: pid=789261: Tue Nov 26 18:33:08 2024 00:37:21.926 read: IOPS=461, BW=1848KiB/s (1892kB/s)(18.1MiB/10006msec) 00:37:21.926 slat (usec): min=8, max=132, avg=58.86, stdev=25.32 00:37:21.926 clat (msec): min=16, max=259, avg=34.13, stdev=13.34 00:37:21.926 lat (msec): min=16, max=259, avg=34.19, stdev=13.33 00:37:21.926 clat percentiles (msec): 00:37:21.926 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:37:21.926 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.926 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.926 | 99.00th=[ 39], 99.50th=[ 45], 99.90th=[ 259], 99.95th=[ 259], 00:37:21.926 | 99.99th=[ 259] 00:37:21.926 bw ( KiB/s): min= 1024, max= 1920, per=4.15%, avg=1838.32, stdev=205.24, samples=19 00:37:21.926 iops : min= 256, max= 480, avg=459.58, stdev=51.31, samples=19 00:37:21.926 lat (msec) : 20=0.35%, 50=99.31%, 500=0.35% 00:37:21.926 cpu : usr=97.10%, sys=1.73%, ctx=212, majf=0, minf=25 00:37:21.926 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:37:21.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 issued rwts: total=4622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.926 filename1: (groupid=0, jobs=1): err= 0: pid=789262: Tue Nov 26 18:33:08 2024 00:37:21.926 read: IOPS=460, BW=1843KiB/s (1887kB/s)(18.0MiB/10003msec) 00:37:21.926 slat (usec): min=8, max=102, avg=36.11, stdev=14.98 00:37:21.926 clat (msec): min=23, max=272, avg=34.41, stdev=14.10 00:37:21.926 lat (msec): min=23, max=272, avg=34.45, stdev=14.10 00:37:21.926 clat percentiles (msec): 00:37:21.926 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.926 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.926 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.926 | 99.00th=[ 39], 99.50th=[ 45], 99.90th=[ 275], 99.95th=[ 275], 00:37:21.926 | 99.99th=[ 275] 00:37:21.926 bw ( KiB/s): min= 896, max= 1920, per=4.16%, avg=1839.16, stdev=234.51, samples=19 00:37:21.926 iops : min= 224, max= 480, avg=459.79, stdev=58.63, samples=19 00:37:21.926 lat (msec) : 50=99.65%, 500=0.35% 00:37:21.926 cpu : usr=98.21%, sys=1.34%, ctx=30, majf=0, minf=27 00:37:21.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 
00:37:21.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.926 filename1: (groupid=0, jobs=1): err= 0: pid=789263: Tue Nov 26 18:33:08 2024 00:37:21.926 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10006msec) 00:37:21.926 slat (usec): min=8, max=136, avg=37.57, stdev=19.69 00:37:21.926 clat (msec): min=31, max=132, avg=34.29, stdev= 7.16 00:37:21.926 lat (msec): min=31, max=132, avg=34.33, stdev= 7.16 00:37:21.926 clat percentiles (msec): 00:37:21.926 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.926 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.926 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.926 | 99.00th=[ 81], 99.50th=[ 86], 99.90th=[ 133], 99.95th=[ 133], 00:37:21.926 | 99.99th=[ 133] 00:37:21.926 bw ( KiB/s): min= 1024, max= 1920, per=4.17%, avg=1845.89, stdev=206.02, samples=19 00:37:21.926 iops : min= 256, max= 480, avg=461.47, stdev=51.51, samples=19 00:37:21.926 lat (msec) : 50=98.96%, 100=0.69%, 250=0.35% 00:37:21.926 cpu : usr=98.22%, sys=1.36%, ctx=25, majf=0, minf=22 00:37:21.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:21.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.926 filename1: (groupid=0, jobs=1): err= 0: pid=789264: Tue Nov 26 18:33:08 2024 00:37:21.926 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10006msec) 00:37:21.926 slat (usec): min=7, max=118, avg=57.63, stdev=24.20 00:37:21.926 clat (msec): min=22, max=132, avg=34.11, stdev= 7.17 00:37:21.926 lat (msec): min=23, max=132, avg=34.16, stdev= 7.17 00:37:21.926 clat percentiles (msec): 00:37:21.926 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:37:21.926 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.926 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.926 | 99.00th=[ 81], 99.50th=[ 87], 99.90th=[ 133], 99.95th=[ 133], 00:37:21.926 | 99.99th=[ 133] 00:37:21.926 bw ( KiB/s): min= 1024, max= 1920, per=4.17%, avg=1845.89, stdev=206.02, samples=19 00:37:21.926 iops : min= 256, max= 480, avg=461.47, stdev=51.51, samples=19 00:37:21.926 lat (msec) : 50=98.96%, 100=0.69%, 250=0.35% 00:37:21.926 cpu : usr=97.66%, sys=1.52%, ctx=65, majf=0, minf=24 00:37:21.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.926 filename1: (groupid=0, jobs=1): err= 0: pid=789265: Tue Nov 26 18:33:08 2024 00:37:21.926 read: IOPS=461, BW=1847KiB/s (1892kB/s)(18.1MiB/10012msec) 00:37:21.926 slat (usec): min=3, max=132, avg=59.85, stdev=32.19 00:37:21.926 clat (msec): min=25, max=126, avg=34.10, stdev= 7.20 00:37:21.926 lat (msec): min=25, max=126, avg=34.16, stdev= 7.20 00:37:21.926 clat 
percentiles (msec): 00:37:21.926 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:37:21.926 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.926 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.926 | 99.00th=[ 86], 99.50th=[ 93], 99.90th=[ 127], 99.95th=[ 127], 00:37:21.926 | 99.99th=[ 127] 00:37:21.926 bw ( KiB/s): min= 1024, max= 1920, per=4.17%, avg=1843.20, stdev=200.89, samples=20 00:37:21.926 iops : min= 256, max= 480, avg=460.80, stdev=50.22, samples=20 00:37:21.926 lat (msec) : 50=98.96%, 100=0.69%, 250=0.35% 00:37:21.926 cpu : usr=98.11%, sys=1.38%, ctx=43, majf=0, minf=28 00:37:21.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.926 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.926 filename1: (groupid=0, jobs=1): err= 0: pid=789266: Tue Nov 26 18:33:08 2024 00:37:21.926 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10005msec) 00:37:21.926 slat (usec): min=8, max=106, avg=37.48, stdev=18.99 00:37:21.926 clat (msec): min=18, max=282, avg=34.39, stdev=14.68 00:37:21.926 lat (msec): min=18, max=282, avg=34.43, stdev=14.68 00:37:21.926 clat percentiles (msec): 00:37:21.926 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:37:21.926 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.926 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.926 | 99.00th=[ 41], 99.50th=[ 42], 99.90th=[ 284], 99.95th=[ 284], 00:37:21.926 | 99.99th=[ 284] 00:37:21.927 bw ( KiB/s): min= 896, max= 1920, per=4.14%, avg=1832.42, stdev=233.90, samples=19 00:37:21.927 iops : min= 224, max= 480, avg=458.11, stdev=58.47, samples=19 00:37:21.927 lat (msec) : 20=0.35%, 50=99.31%, 500=0.35% 00:37:21.927 cpu : usr=97.93%, sys=1.54%, ctx=50, majf=0, minf=22 00:37:21.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.927 filename2: (groupid=0, jobs=1): err= 0: pid=789267: Tue Nov 26 18:33:08 2024 00:37:21.927 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10006msec) 00:37:21.927 slat (nsec): min=7925, max=64402, avg=28690.79, stdev=12645.31 00:37:21.927 clat (msec): min=11, max=282, avg=34.49, stdev=14.67 00:37:21.927 lat (msec): min=11, max=282, avg=34.52, stdev=14.67 00:37:21.927 clat percentiles (msec): 00:37:21.927 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.927 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.927 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.927 | 99.00th=[ 41], 99.50th=[ 43], 99.90th=[ 284], 99.95th=[ 284], 00:37:21.927 | 99.99th=[ 284] 00:37:21.927 bw ( KiB/s): min= 894, max= 1920, per=4.14%, avg=1832.32, stdev=233.07, samples=19 00:37:21.927 iops : min= 223, max= 480, avg=458.05, stdev=58.38, samples=19 00:37:21.927 lat (msec) : 20=0.35%, 50=99.26%, 100=0.04%, 500=0.35% 00:37:21.927 cpu : usr=98.23%, sys=1.39%, ctx=13, majf=0, minf=30 00:37:21.927 IO depths : 1=2.8%, 
2=9.0%, 4=25.0%, 8=53.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:37:21.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.927 filename2: (groupid=0, jobs=1): err= 0: pid=789268: Tue Nov 26 18:33:08 2024 00:37:21.927 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10006msec) 00:37:21.927 slat (usec): min=8, max=124, avg=51.36, stdev=24.58 00:37:21.927 clat (msec): min=22, max=132, avg=34.15, stdev= 7.18 00:37:21.927 lat (msec): min=23, max=132, avg=34.21, stdev= 7.18 00:37:21.927 clat percentiles (msec): 00:37:21.927 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:37:21.927 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.927 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.927 | 99.00th=[ 81], 99.50th=[ 87], 99.90th=[ 133], 99.95th=[ 133], 00:37:21.927 | 99.99th=[ 133] 00:37:21.927 bw ( KiB/s): min= 1024, max= 1920, per=4.17%, avg=1845.89, stdev=206.02, samples=19 00:37:21.927 iops : min= 256, max= 480, avg=461.47, stdev=51.51, samples=19 00:37:21.927 lat (msec) : 50=98.96%, 100=0.69%, 250=0.35% 00:37:21.927 cpu : usr=96.69%, sys=2.10%, ctx=203, majf=0, minf=33 00:37:21.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.927 filename2: (groupid=0, jobs=1): err= 0: pid=789269: Tue Nov 26 18:33:08 2024 00:37:21.927 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10007msec) 00:37:21.927 slat (usec): min=9, max=118, avg=37.82, stdev=16.88 00:37:21.927 clat (msec): min=22, max=133, avg=34.29, stdev= 7.18 00:37:21.927 lat (msec): min=22, max=133, avg=34.33, stdev= 7.18 00:37:21.927 clat percentiles (msec): 00:37:21.927 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.927 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.927 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.927 | 99.00th=[ 81], 99.50th=[ 86], 99.90th=[ 133], 99.95th=[ 133], 00:37:21.927 | 99.99th=[ 133] 00:37:21.927 bw ( KiB/s): min= 1024, max= 1920, per=4.17%, avg=1845.89, stdev=206.02, samples=19 00:37:21.927 iops : min= 256, max= 480, avg=461.47, stdev=51.51, samples=19 00:37:21.927 lat (msec) : 50=98.96%, 100=0.69%, 250=0.35% 00:37:21.927 cpu : usr=97.32%, sys=1.65%, ctx=101, majf=0, minf=32 00:37:21.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.927 filename2: (groupid=0, jobs=1): err= 0: pid=789270: Tue Nov 26 18:33:08 2024 00:37:21.927 read: IOPS=462, BW=1848KiB/s (1893kB/s)(18.1MiB/10006msec) 00:37:21.927 slat (usec): min=8, max=107, avg=37.63, stdev=16.47 00:37:21.927 clat (msec): min=16, max=259, avg=34.29, stdev=13.35 00:37:21.927 lat (msec): min=16, 
max=259, avg=34.33, stdev=13.35 00:37:21.927 clat percentiles (msec): 00:37:21.927 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.927 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.927 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.927 | 99.00th=[ 43], 99.50th=[ 45], 99.90th=[ 259], 99.95th=[ 259], 00:37:21.927 | 99.99th=[ 259] 00:37:21.927 bw ( KiB/s): min= 1024, max= 1920, per=4.16%, avg=1839.16, stdev=205.56, samples=19 00:37:21.927 iops : min= 256, max= 480, avg=459.79, stdev=51.39, samples=19 00:37:21.927 lat (msec) : 20=0.35%, 50=99.31%, 500=0.35% 00:37:21.927 cpu : usr=97.99%, sys=1.40%, ctx=46, majf=0, minf=28 00:37:21.927 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:37:21.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.927 filename2: (groupid=0, jobs=1): err= 0: pid=789271: Tue Nov 26 18:33:08 2024 00:37:21.927 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10005msec) 00:37:21.927 slat (usec): min=8, max=106, avg=27.39, stdev=19.56 00:37:21.927 clat (msec): min=18, max=282, avg=34.52, stdev=14.73 00:37:21.927 lat (msec): min=18, max=282, avg=34.55, stdev=14.73 00:37:21.927 clat percentiles (msec): 00:37:21.927 | 1.00th=[ 32], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:37:21.927 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.927 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.927 | 99.00th=[ 43], 99.50th=[ 46], 99.90th=[ 284], 99.95th=[ 284], 00:37:21.927 | 99.99th=[ 284] 00:37:21.927 bw ( KiB/s): min= 896, max= 1936, per=4.14%, avg=1832.42, stdev=233.96, samples=19 00:37:21.927 iops : min= 224, max= 484, avg=458.11, stdev=58.49, samples=19 00:37:21.927 lat (msec) : 20=0.35%, 50=99.31%, 500=0.35% 00:37:21.927 cpu : usr=96.96%, sys=1.90%, ctx=166, majf=0, minf=23 00:37:21.927 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:37:21.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.927 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.928 filename2: (groupid=0, jobs=1): err= 0: pid=789272: Tue Nov 26 18:33:08 2024 00:37:21.928 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10005msec) 00:37:21.928 slat (usec): min=8, max=122, avg=46.39, stdev=25.24 00:37:21.928 clat (msec): min=23, max=275, avg=34.34, stdev=14.25 00:37:21.928 lat (msec): min=23, max=275, avg=34.39, stdev=14.24 00:37:21.928 clat percentiles (msec): 00:37:21.928 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:37:21.928 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.928 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.928 | 99.00th=[ 39], 99.50th=[ 45], 99.90th=[ 275], 99.95th=[ 275], 00:37:21.928 | 99.99th=[ 275] 00:37:21.928 bw ( KiB/s): min= 896, max= 1920, per=4.16%, avg=1839.16, stdev=234.51, samples=19 00:37:21.928 iops : min= 224, max= 480, avg=459.79, stdev=58.63, samples=19 00:37:21.928 lat (msec) : 50=99.65%, 500=0.35% 00:37:21.928 cpu : usr=98.22%, sys=1.23%, ctx=48, majf=0, minf=25 00:37:21.928 IO 
depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:21.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.928 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.928 filename2: (groupid=0, jobs=1): err= 0: pid=789273: Tue Nov 26 18:33:08 2024 00:37:21.928 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10005msec) 00:37:21.928 slat (usec): min=8, max=107, avg=44.29, stdev=22.95 00:37:21.928 clat (msec): min=18, max=282, avg=34.33, stdev=14.68 00:37:21.928 lat (msec): min=18, max=282, avg=34.37, stdev=14.67 00:37:21.928 clat percentiles (msec): 00:37:21.928 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:37:21.928 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.928 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.928 | 99.00th=[ 41], 99.50th=[ 42], 99.90th=[ 284], 99.95th=[ 284], 00:37:21.928 | 99.99th=[ 284] 00:37:21.928 bw ( KiB/s): min= 896, max= 1920, per=4.14%, avg=1832.42, stdev=233.90, samples=19 00:37:21.928 iops : min= 224, max= 480, avg=458.11, stdev=58.47, samples=19 00:37:21.928 lat (msec) : 20=0.35%, 50=99.31%, 500=0.35% 00:37:21.928 cpu : usr=98.14%, sys=1.23%, ctx=40, majf=0, minf=23 00:37:21.928 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:21.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.928 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.928 filename2: (groupid=0, jobs=1): err= 0: pid=789274: Tue Nov 26 18:33:08 2024 00:37:21.928 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10005msec) 00:37:21.928 slat (usec): min=10, max=124, avg=46.84, stdev=23.52 00:37:21.928 clat (msec): min=18, max=282, avg=34.31, stdev=14.72 00:37:21.928 lat (msec): min=18, max=282, avg=34.36, stdev=14.72 00:37:21.928 clat percentiles (msec): 00:37:21.928 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:37:21.928 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:37:21.928 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 35], 00:37:21.928 | 99.00th=[ 41], 99.50th=[ 43], 99.90th=[ 284], 99.95th=[ 284], 00:37:21.928 | 99.99th=[ 284] 00:37:21.928 bw ( KiB/s): min= 897, max= 1920, per=4.14%, avg=1832.47, stdev=233.68, samples=19 00:37:21.928 iops : min= 224, max= 480, avg=458.11, stdev=58.47, samples=19 00:37:21.928 lat (msec) : 20=0.35%, 50=99.31%, 500=0.35% 00:37:21.928 cpu : usr=97.97%, sys=1.46%, ctx=35, majf=0, minf=20 00:37:21.928 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:21.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.928 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:21.928 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:21.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:21.928 00:37:21.928 Run status group 0 (all jobs): 00:37:21.928 READ: bw=43.2MiB/s (45.3MB/s), 1840KiB/s-1856KiB/s (1884kB/s-1901kB/s), io=433MiB (454MB), run=10003-10019msec 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:21.928 18:33:09 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:21.928 18:33:09 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.928 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.928 bdev_null0 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.929 [2024-11-26 18:33:09.111097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.929 bdev_null1 00:37:21.929 18:33:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:21.929 { 00:37:21.929 "params": { 00:37:21.929 "name": "Nvme$subsystem", 00:37:21.929 "trtype": "$TEST_TRANSPORT", 00:37:21.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:21.929 "adrfam": "ipv4", 00:37:21.929 "trsvcid": "$NVMF_PORT", 00:37:21.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:21.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:21.929 "hdgst": ${hdgst:-false}, 00:37:21.929 "ddgst": ${ddgst:-false} 00:37:21.929 }, 00:37:21.929 "method": "bdev_nvme_attach_controller" 00:37:21.929 } 00:37:21.929 EOF 00:37:21.929 )") 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:21.929 18:33:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:21.929 { 00:37:21.929 "params": { 00:37:21.929 "name": "Nvme$subsystem", 00:37:21.929 "trtype": "$TEST_TRANSPORT", 00:37:21.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:21.929 "adrfam": "ipv4", 00:37:21.929 "trsvcid": "$NVMF_PORT", 00:37:21.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:21.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:21.929 "hdgst": ${hdgst:-false}, 00:37:21.929 "ddgst": ${ddgst:-false} 00:37:21.929 }, 00:37:21.929 "method": "bdev_nvme_attach_controller" 00:37:21.929 } 00:37:21.929 EOF 00:37:21.929 )") 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
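For reference, the fio_bdev call traced above amounts to running stock fio with SPDK's external bdev ioengine preloaded and an SPDK JSON config supplied on a spare file descriptor. A standalone sketch, assuming an SPDK build tree and a hand-written job file instead of the generated /dev/fd inputs used by the harness:
# Sketch only: the SPDK path, bdev.json and dif.job names are illustrative, not taken from this run.
SPDK=/path/to/spdk
LD_PRELOAD=$SPDK/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.job
# bdev.json attaches the target bdevs (compare the printed bdev_nvme_attach_controller
# blocks below); the job file selects them via filename=, e.g. filename=Nvme0n1.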
00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:21.929 18:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:21.929 "params": { 00:37:21.929 "name": "Nvme0", 00:37:21.929 "trtype": "tcp", 00:37:21.929 "traddr": "10.0.0.2", 00:37:21.929 "adrfam": "ipv4", 00:37:21.929 "trsvcid": "4420", 00:37:21.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:21.929 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:21.929 "hdgst": false, 00:37:21.929 "ddgst": false 00:37:21.929 }, 00:37:21.929 "method": "bdev_nvme_attach_controller" 00:37:21.929 },{ 00:37:21.929 "params": { 00:37:21.929 "name": "Nvme1", 00:37:21.930 "trtype": "tcp", 00:37:21.930 "traddr": "10.0.0.2", 00:37:21.930 "adrfam": "ipv4", 00:37:21.930 "trsvcid": "4420", 00:37:21.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:21.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:21.930 "hdgst": false, 00:37:21.930 "ddgst": false 00:37:21.930 }, 00:37:21.930 "method": "bdev_nvme_attach_controller" 00:37:21.930 }' 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:21.930 18:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:21.930 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:21.930 ... 00:37:21.930 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:21.930 ... 
00:37:21.930 fio-3.35 00:37:21.930 Starting 4 threads 00:37:28.493 00:37:28.493 filename0: (groupid=0, jobs=1): err= 0: pid=791272: Tue Nov 26 18:33:15 2024 00:37:28.493 read: IOPS=1833, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5003msec) 00:37:28.493 slat (nsec): min=6937, max=72781, avg=18588.98, stdev=9992.89 00:37:28.493 clat (usec): min=708, max=7916, avg=4292.97, stdev=643.08 00:37:28.493 lat (usec): min=721, max=7939, avg=4311.55, stdev=642.87 00:37:28.493 clat percentiles (usec): 00:37:28.493 | 1.00th=[ 2245], 5.00th=[ 3458], 10.00th=[ 3752], 20.00th=[ 4015], 00:37:28.493 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4293], 00:37:28.493 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4817], 95.00th=[ 5342], 00:37:28.493 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7570], 99.95th=[ 7767], 00:37:28.493 | 99.99th=[ 7898] 00:37:28.493 bw ( KiB/s): min=14172, max=15072, per=24.78%, avg=14627.11, stdev=308.58, samples=9 00:37:28.493 iops : min= 1771, max= 1884, avg=1828.33, stdev=38.67, samples=9 00:37:28.493 lat (usec) : 750=0.02%, 1000=0.04% 00:37:28.493 lat (msec) : 2=0.65%, 4=18.28%, 10=81.00% 00:37:28.493 cpu : usr=95.40%, sys=4.12%, ctx=11, majf=0, minf=120 00:37:28.493 IO depths : 1=0.5%, 2=16.4%, 4=56.4%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.493 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.493 issued rwts: total=9174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.493 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:28.493 filename0: (groupid=0, jobs=1): err= 0: pid=791273: Tue Nov 26 18:33:15 2024 00:37:28.493 read: IOPS=1909, BW=14.9MiB/s (15.6MB/s)(74.6MiB/5004msec) 00:37:28.493 slat (nsec): min=7658, max=66402, avg=17088.98, stdev=9146.67 00:37:28.493 clat (usec): min=908, max=7645, avg=4130.45, stdev=531.61 00:37:28.493 lat (usec): min=928, max=7662, avg=4147.54, stdev=532.85 00:37:28.493 clat percentiles (usec): 00:37:28.493 | 1.00th=[ 2474], 5.00th=[ 3261], 10.00th=[ 3523], 20.00th=[ 3818], 00:37:28.493 | 30.00th=[ 3982], 40.00th=[ 4113], 50.00th=[ 4228], 60.00th=[ 4293], 00:37:28.493 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4752], 00:37:28.493 | 99.00th=[ 5800], 99.50th=[ 6521], 99.90th=[ 7439], 99.95th=[ 7439], 00:37:28.493 | 99.99th=[ 7635] 00:37:28.493 bw ( KiB/s): min=14736, max=16032, per=25.89%, avg=15285.33, stdev=437.30, samples=9 00:37:28.493 iops : min= 1842, max= 2004, avg=1910.67, stdev=54.66, samples=9 00:37:28.493 lat (usec) : 1000=0.02% 00:37:28.493 lat (msec) : 2=0.44%, 4=29.97%, 10=69.57% 00:37:28.493 cpu : usr=95.22%, sys=4.26%, ctx=21, majf=0, minf=91 00:37:28.493 IO depths : 1=0.7%, 2=14.9%, 4=58.0%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.493 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.493 issued rwts: total=9554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.493 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:28.493 filename1: (groupid=0, jobs=1): err= 0: pid=791274: Tue Nov 26 18:33:15 2024 00:37:28.493 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.4MiB/5001msec) 00:37:28.493 slat (nsec): min=6914, max=72805, avg=17795.70, stdev=9172.46 00:37:28.493 clat (usec): min=900, max=7851, avg=4313.51, stdev=651.98 00:37:28.493 lat (usec): min=913, max=7867, avg=4331.31, stdev=651.87 00:37:28.493 clat percentiles (usec): 00:37:28.493 | 1.00th=[ 2147], 
5.00th=[ 3523], 10.00th=[ 3785], 20.00th=[ 4015], 00:37:28.493 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:37:28.493 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5473], 00:37:28.493 | 99.00th=[ 6783], 99.50th=[ 7111], 99.90th=[ 7504], 99.95th=[ 7570], 00:37:28.493 | 99.99th=[ 7832] 00:37:28.493 bw ( KiB/s): min=14176, max=14944, per=24.75%, avg=14613.33, stdev=238.13, samples=9 00:37:28.493 iops : min= 1772, max= 1868, avg=1826.67, stdev=29.77, samples=9 00:37:28.493 lat (usec) : 1000=0.02% 00:37:28.493 lat (msec) : 2=0.87%, 4=17.19%, 10=81.91% 00:37:28.493 cpu : usr=94.90%, sys=4.58%, ctx=7, majf=0, minf=50 00:37:28.493 IO depths : 1=0.4%, 2=13.4%, 4=58.8%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.493 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.493 issued rwts: total=9144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.493 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:28.493 filename1: (groupid=0, jobs=1): err= 0: pid=791275: Tue Nov 26 18:33:15 2024 00:37:28.493 read: IOPS=1809, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5002msec) 00:37:28.493 slat (nsec): min=7354, max=87994, avg=18556.58, stdev=10037.80 00:37:28.493 clat (usec): min=843, max=8009, avg=4352.50, stdev=681.28 00:37:28.493 lat (usec): min=857, max=8024, avg=4371.06, stdev=681.17 00:37:28.493 clat percentiles (usec): 00:37:28.493 | 1.00th=[ 2278], 5.00th=[ 3556], 10.00th=[ 3818], 20.00th=[ 4047], 00:37:28.493 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:37:28.493 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 5014], 95.00th=[ 5604], 00:37:28.493 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 7701], 99.95th=[ 7767], 00:37:28.493 | 99.99th=[ 8029] 00:37:28.493 bw ( KiB/s): min=14048, max=14944, per=24.53%, avg=14481.78, stdev=251.26, samples=9 00:37:28.493 iops : min= 1756, max= 1868, avg=1810.22, stdev=31.41, samples=9 00:37:28.493 lat (usec) : 1000=0.19% 00:37:28.493 lat (msec) : 2=0.59%, 4=15.55%, 10=83.67% 00:37:28.493 cpu : usr=95.00%, sys=4.50%, ctx=10, majf=0, minf=69 00:37:28.493 IO depths : 1=0.1%, 2=14.5%, 4=58.0%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:28.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.493 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.493 issued rwts: total=9053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.493 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:28.493 00:37:28.493 Run status group 0 (all jobs): 00:37:28.493 READ: bw=57.6MiB/s (60.4MB/s), 14.1MiB/s-14.9MiB/s (14.8MB/s-15.6MB/s), io=288MiB (302MB), run=5001-5004msec 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.493 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.494 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.494 00:37:28.494 real 0m24.948s 00:37:28.494 user 4m32.681s 00:37:28.494 sys 0m6.593s 00:37:28.494 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:28.494 18:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.494 ************************************ 00:37:28.494 END TEST fio_dif_rand_params 00:37:28.494 ************************************ 00:37:28.494 18:33:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:28.494 18:33:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:28.494 18:33:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:28.494 18:33:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:28.494 ************************************ 00:37:28.494 START TEST fio_dif_digest 00:37:28.494 ************************************ 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:28.494 
18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.494 bdev_null0 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:28.494 [2024-11-26 18:33:15.749992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.494 { 00:37:28.494 "params": { 00:37:28.494 "name": "Nvme$subsystem", 00:37:28.494 "trtype": "$TEST_TRANSPORT", 00:37:28.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.494 "adrfam": "ipv4", 00:37:28.494 "trsvcid": "$NVMF_PORT", 00:37:28.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.494 "hdgst": ${hdgst:-false}, 00:37:28.494 "ddgst": ${ddgst:-false} 00:37:28.494 }, 00:37:28.494 "method": "bdev_nvme_attach_controller" 00:37:28.494 } 00:37:28.494 EOF 00:37:28.494 )") 
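The subsystem setup traced above can be reproduced by hand against an already running nvmf target; an approximate rpc.py sequence, assuming the script path and that the TCP transport was created earlier in the run:
# Approximate manual equivalent of the rpc_cmd calls above.
# Assumes nvmf_create_transport -t tcp has already been issued.
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420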
00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
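Relative to the earlier random-params runs, the attach parameters generated below differ only in enabling NVMe/TCP header and data digests (hdgst/ddgst). A minimal hand-written equivalent, assuming SPDK's standard JSON config wrapper and an illustrative controller name:
# Sketch of a digest-enabled bdev config matching the parameters printed below.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF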
00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:28.494 "params": { 00:37:28.494 "name": "Nvme0", 00:37:28.494 "trtype": "tcp", 00:37:28.494 "traddr": "10.0.0.2", 00:37:28.494 "adrfam": "ipv4", 00:37:28.494 "trsvcid": "4420", 00:37:28.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:28.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:28.494 "hdgst": true, 00:37:28.494 "ddgst": true 00:37:28.494 }, 00:37:28.494 "method": "bdev_nvme_attach_controller" 00:37:28.494 }' 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:28.494 18:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.494 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:28.494 ... 
00:37:28.494 fio-3.35 00:37:28.494 Starting 3 threads 00:37:40.690 00:37:40.690 filename0: (groupid=0, jobs=1): err= 0: pid=792095: Tue Nov 26 18:33:26 2024 00:37:40.690 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(262MiB/10047msec) 00:37:40.690 slat (usec): min=4, max=101, avg=16.31, stdev= 3.09 00:37:40.690 clat (usec): min=10792, max=52603, avg=14368.04, stdev=1537.20 00:37:40.690 lat (usec): min=10812, max=52618, avg=14384.35, stdev=1537.12 00:37:40.690 clat percentiles (usec): 00:37:40.690 | 1.00th=[12125], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:37:40.690 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:37:40.690 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:37:40.690 | 99.00th=[17171], 99.50th=[17433], 99.90th=[20841], 99.95th=[51119], 00:37:40.691 | 99.99th=[52691] 00:37:40.691 bw ( KiB/s): min=26112, max=27392, per=34.35%, avg=26739.20, stdev=304.89, samples=20 00:37:40.691 iops : min= 204, max= 214, avg=208.90, stdev= 2.38, samples=20 00:37:40.691 lat (msec) : 20=99.76%, 50=0.14%, 100=0.10% 00:37:40.691 cpu : usr=93.24%, sys=6.19%, ctx=20, majf=0, minf=155 00:37:40.691 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.691 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:40.691 filename0: (groupid=0, jobs=1): err= 0: pid=792096: Tue Nov 26 18:33:26 2024 00:37:40.691 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(252MiB/10046msec) 00:37:40.691 slat (nsec): min=4662, max=30630, avg=14649.26, stdev=1511.00 00:37:40.691 clat (usec): min=11523, max=52277, avg=14926.47, stdev=1497.85 00:37:40.691 lat (usec): min=11538, max=52291, avg=14941.12, stdev=1497.82 00:37:40.691 clat percentiles (usec): 00:37:40.691 | 1.00th=[12649], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:37:40.691 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:37:40.691 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:37:40.691 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19006], 99.95th=[48497], 00:37:40.691 | 99.99th=[52167] 00:37:40.691 bw ( KiB/s): min=25344, max=26368, per=33.08%, avg=25753.60, stdev=267.85, samples=20 00:37:40.691 iops : min= 198, max= 206, avg=201.20, stdev= 2.09, samples=20 00:37:40.691 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:37:40.691 cpu : usr=93.98%, sys=5.55%, ctx=22, majf=0, minf=110 00:37:40.691 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.691 issued rwts: total=2014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:40.691 filename0: (groupid=0, jobs=1): err= 0: pid=792097: Tue Nov 26 18:33:26 2024 00:37:40.691 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(251MiB/10047msec) 00:37:40.691 slat (nsec): min=4261, max=95006, avg=14634.63, stdev=2152.92 00:37:40.691 clat (usec): min=12030, max=52229, avg=15001.68, stdev=1474.09 00:37:40.691 lat (usec): min=12045, max=52243, avg=15016.32, stdev=1474.05 00:37:40.691 clat percentiles (usec): 00:37:40.691 | 1.00th=[12780], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:37:40.691 | 
30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:37:40.691 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:37:40.691 | 99.00th=[17695], 99.50th=[17957], 99.90th=[20055], 99.95th=[48497], 00:37:40.691 | 99.99th=[52167] 00:37:40.691 bw ( KiB/s): min=24832, max=26368, per=32.92%, avg=25628.05, stdev=383.54, samples=20 00:37:40.691 iops : min= 194, max= 206, avg=200.20, stdev= 3.04, samples=20 00:37:40.691 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:37:40.691 cpu : usr=93.99%, sys=5.53%, ctx=14, majf=0, minf=117 00:37:40.691 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:40.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:40.691 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:40.691 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:40.691 00:37:40.691 Run status group 0 (all jobs): 00:37:40.691 READ: bw=76.0MiB/s (79.7MB/s), 24.9MiB/s-26.0MiB/s (26.1MB/s-27.3MB/s), io=764MiB (801MB), run=10046-10047msec 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.691 00:37:40.691 real 0m11.266s 00:37:40.691 user 0m29.393s 00:37:40.691 sys 0m2.060s 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:40.691 18:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:40.691 ************************************ 00:37:40.691 END TEST fio_dif_digest 00:37:40.691 ************************************ 00:37:40.691 18:33:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:40.691 18:33:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:40.691 18:33:26 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:40.691 18:33:26 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:40.691 rmmod nvme_tcp 00:37:40.691 rmmod nvme_fabrics 00:37:40.691 rmmod nvme_keyring 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 785344 ']' 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 785344 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 785344 ']' 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 785344 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785344 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785344' 00:37:40.691 killing process with pid 785344 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@973 -- # kill 785344 00:37:40.691 18:33:27 nvmf_dif -- common/autotest_common.sh@978 -- # wait 785344 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:40.691 18:33:27 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:40.691 Waiting for block devices as requested 00:37:40.691 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:40.691 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:40.950 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:40.950 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:40.950 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:40.950 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:40.950 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:41.208 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:41.208 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:37:41.465 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:37:41.465 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:37:41.465 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:37:41.465 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:37:41.723 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:37:41.723 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:37:41.723 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:37:41.981 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:37:41.981 18:33:29 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:41.981 18:33:29 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:41.981 18:33:29 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:41.981 18:33:29 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:41.981 18:33:29 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:41.981 18:33:29 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:41.981 18:33:29 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:41.981 18:33:29 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:41.981 18:33:29 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:41.981 18:33:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:41.981 18:33:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:43.915 18:33:31 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:43.915 00:37:43.915 real 1m8.037s 
00:37:43.915 user 6m31.198s 00:37:43.915 sys 0m17.594s 00:37:43.915 18:33:31 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:43.915 18:33:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:43.915 ************************************ 00:37:43.915 END TEST nvmf_dif 00:37:43.915 ************************************ 00:37:44.174 18:33:31 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:44.174 18:33:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:44.174 18:33:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:44.174 18:33:31 -- common/autotest_common.sh@10 -- # set +x 00:37:44.174 ************************************ 00:37:44.174 START TEST nvmf_abort_qd_sizes 00:37:44.174 ************************************ 00:37:44.174 18:33:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:44.174 * Looking for test storage... 00:37:44.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:44.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.174 --rc genhtml_branch_coverage=1 00:37:44.174 --rc genhtml_function_coverage=1 00:37:44.174 --rc genhtml_legend=1 00:37:44.174 --rc geninfo_all_blocks=1 00:37:44.174 --rc geninfo_unexecuted_blocks=1 00:37:44.174 00:37:44.174 ' 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:44.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.174 --rc genhtml_branch_coverage=1 00:37:44.174 --rc genhtml_function_coverage=1 00:37:44.174 --rc genhtml_legend=1 00:37:44.174 --rc geninfo_all_blocks=1 00:37:44.174 --rc geninfo_unexecuted_blocks=1 00:37:44.174 00:37:44.174 ' 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:44.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.174 --rc genhtml_branch_coverage=1 00:37:44.174 --rc genhtml_function_coverage=1 00:37:44.174 --rc genhtml_legend=1 00:37:44.174 --rc geninfo_all_blocks=1 00:37:44.174 --rc geninfo_unexecuted_blocks=1 00:37:44.174 00:37:44.174 ' 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:44.174 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:44.174 --rc genhtml_branch_coverage=1 00:37:44.174 --rc genhtml_function_coverage=1 00:37:44.174 --rc genhtml_legend=1 00:37:44.174 --rc geninfo_all_blocks=1 00:37:44.174 --rc geninfo_unexecuted_blocks=1 00:37:44.174 00:37:44.174 ' 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.174 18:33:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:44.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:44.175 18:33:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:37:46.703 Found 0000:09:00.0 (0x8086 - 0x159b) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:37:46.703 Found 0000:09:00.1 (0x8086 - 0x159b) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:37:46.703 Found net devices under 0000:09:00.0: cvl_0_0 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:37:46.703 Found net devices under 0000:09:00.1: cvl_0_1 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:46.703 18:33:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:46.703 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:46.704 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:46.704 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:46.704 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:46.704 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:46.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:46.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:37:46.704 00:37:46.704 --- 10.0.0.2 ping statistics --- 00:37:46.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.704 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:37:46.704 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:46.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:46.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:37:46.704 00:37:46.704 --- 10.0.0.1 ping statistics --- 00:37:46.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:46.704 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:37:46.704 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:46.704 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:37:46.704 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:46.704 18:33:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:47.639 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:47.639 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:47.639 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:47.639 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:47.639 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:47.639 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:47.900 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:47.900 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:47.900 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:37:47.900 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:37:47.900 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:37:47.900 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:37:47.900 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:37:47.900 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:37:47.900 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:37:47.900 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:37:48.840 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=796948 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 796948 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 796948 ']' 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:48.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:48.840 18:33:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.099 [2024-11-26 18:33:36.893750] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:37:49.099 [2024-11-26 18:33:36.893861] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:49.099 [2024-11-26 18:33:36.965278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:49.099 [2024-11-26 18:33:37.022730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:49.099 [2024-11-26 18:33:37.022785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:49.099 [2024-11-26 18:33:37.022808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:49.099 [2024-11-26 18:33:37.022819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:49.099 [2024-11-26 18:33:37.022828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:49.099 [2024-11-26 18:33:37.024249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:49.099 [2024-11-26 18:33:37.024319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:49.099 [2024-11-26 18:33:37.024379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:49.099 [2024-11-26 18:33:37.024382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:49.357 
18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:49.357 18:33:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:49.357 ************************************ 00:37:49.357 START TEST spdk_target_abort 00:37:49.357 ************************************ 00:37:49.357 18:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:49.357 18:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:49.357 18:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:37:49.357 18:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.357 18:33:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.641 spdk_targetn1 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.641 [2024-11-26 18:33:40.046461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:52.641 [2024-11-26 18:33:40.090778] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:52.641 18:33:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:55.228 Initializing NVMe Controllers 00:37:55.228 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:55.228 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:55.228 Initialization complete. Launching workers. 00:37:55.228 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13400, failed: 0 00:37:55.228 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1189, failed to submit 12211 00:37:55.228 success 751, unsuccessful 438, failed 0 00:37:55.228 18:33:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:55.228 18:33:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:59.407 Initializing NVMe Controllers 00:37:59.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:59.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:59.407 Initialization complete. Launching workers. 00:37:59.407 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8905, failed: 0 00:37:59.407 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1263, failed to submit 7642 00:37:59.407 success 324, unsuccessful 939, failed 0 00:37:59.407 18:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:59.407 18:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:01.937 Initializing NVMe Controllers 00:38:01.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:01.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:01.937 Initialization complete. Launching workers. 
00:38:01.937 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31317, failed: 0 00:38:01.937 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2569, failed to submit 28748 00:38:01.937 success 509, unsuccessful 2060, failed 0 00:38:01.937 18:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:01.937 18:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.937 18:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:01.937 18:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.937 18:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:01.937 18:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.937 18:33:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 796948 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 796948 ']' 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 796948 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 796948 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 796948' 00:38:03.309 killing process with pid 796948 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 796948 00:38:03.309 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 796948 00:38:03.568 00:38:03.568 real 0m14.289s 00:38:03.568 user 0m54.220s 00:38:03.568 sys 0m2.667s 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:03.568 ************************************ 00:38:03.568 END TEST spdk_target_abort 00:38:03.568 ************************************ 00:38:03.568 18:33:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:03.568 18:33:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:03.568 18:33:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:03.568 18:33:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:03.568 ************************************ 00:38:03.568 START TEST kernel_target_abort 00:38:03.568 
************************************ 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:03.568 18:33:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:04.942 Waiting for block devices as requested 00:38:04.942 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:04.942 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:04.942 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:04.942 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:05.201 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:05.201 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:05.201 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:05.201 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:05.461 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:38:05.461 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:05.719 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:05.719 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:05.719 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:05.719 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:05.978 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:05.978 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:05.978 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:06.237 No valid GPT data, bailing 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:06.237 18:33:54 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:06.237 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:38:06.237 00:38:06.237 Discovery Log Number of Records 2, Generation counter 2 00:38:06.237 =====Discovery Log Entry 0====== 00:38:06.237 trtype: tcp 00:38:06.237 adrfam: ipv4 00:38:06.237 subtype: current discovery subsystem 00:38:06.237 treq: not specified, sq flow control disable supported 00:38:06.237 portid: 1 00:38:06.237 trsvcid: 4420 00:38:06.237 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:06.237 traddr: 10.0.0.1 00:38:06.237 eflags: none 00:38:06.237 sectype: none 00:38:06.237 =====Discovery Log Entry 1====== 00:38:06.237 trtype: tcp 00:38:06.237 adrfam: ipv4 00:38:06.237 subtype: nvme subsystem 00:38:06.237 treq: not specified, sq flow control disable supported 00:38:06.237 portid: 1 00:38:06.237 trsvcid: 4420 00:38:06.237 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:06.237 traddr: 10.0.0.1 00:38:06.237 eflags: none 00:38:06.238 sectype: none 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.238 18:33:54 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:06.238 18:33:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.523 Initializing NVMe Controllers 00:38:09.523 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:09.523 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:09.523 Initialization complete. Launching workers. 00:38:09.523 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48668, failed: 0 00:38:09.523 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48668, failed to submit 0 00:38:09.523 success 0, unsuccessful 48668, failed 0 00:38:09.523 18:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:09.523 18:33:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:12.804 Initializing NVMe Controllers 00:38:12.804 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:12.805 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:12.805 Initialization complete. Launching workers. 
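The target being driven here is not an SPDK application target: it is the kernel nvmet driver, assembled directly through configfs, and the rabort helper then points the SPDK abort example at it once per queue depth in (4 24 64); the remaining runs and the configfs teardown follow below. A condensed sketch of those two steps, with the NQN, device, address and abort options taken from the trace (xtrace does not show redirection targets, so the configfs attribute names are assumptions, and the helper names are illustrative rather than the exact functions in nvmf/common.sh):

configure_kernel_target() {
    local nqn=nqn.2016-06.io.spdk:testnqn dev=/dev/nvme0n1   # dev picked after skipping zoned / in-use disks
    local subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    local port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1        > "$subsys/attr_allow_any_host"            # assumed attribute name
    echo "$dev"   > "$subsys/namespaces/1/device_path"
    echo 1        > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    # (the trace also writes "SPDK-<nqn>" to a serial/model attribute; that file is not visible in the trace, so it is omitted here)
    ln -s "$subsys" "$port/subsystems/"
}

rabort() {   # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local qds=(4 24 64) target="" r qd
    for r in trtype adrfam traddr trsvcid subnqn; do
        target+="${target:+ }$r:${!r}"                        # builds "trtype:tcp adrfam:IPv4 ..."
    done
    for qd in "${qds[@]}"; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done
}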
00:38:12.805 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95673, failed: 0 00:38:12.805 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21242, failed to submit 74431 00:38:12.805 success 0, unsuccessful 21242, failed 0 00:38:12.805 18:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:12.805 18:34:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:16.090 Initializing NVMe Controllers 00:38:16.090 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:16.090 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:16.090 Initialization complete. Launching workers. 00:38:16.090 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87573, failed: 0 00:38:16.090 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21874, failed to submit 65699 00:38:16.090 success 0, unsuccessful 21874, failed 0 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:16.090 18:34:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:17.023 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:17.023 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:17.023 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:17.023 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:17.023 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:17.023 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:38:17.023 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:38:17.023 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:17.023 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:38:17.023 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:38:17.023 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:38:17.023 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:38:17.023 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:38:17.023 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:38:17.023 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:38:17.023 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:38:17.959 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:38:18.218 00:38:18.218 real 0m14.446s 00:38:18.218 user 0m6.089s 00:38:18.218 sys 0m3.528s 00:38:18.218 18:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:18.218 18:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:18.218 ************************************ 00:38:18.218 END TEST kernel_target_abort 00:38:18.218 ************************************ 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:18.218 rmmod nvme_tcp 00:38:18.218 rmmod nvme_fabrics 00:38:18.218 rmmod nvme_keyring 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 796948 ']' 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 796948 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 796948 ']' 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 796948 00:38:18.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (796948) - No such process 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 796948 is not found' 00:38:18.218 Process with pid 796948 is not found 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:18.218 18:34:06 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:19.590 Waiting for block devices as requested 00:38:19.591 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:19.591 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:19.591 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:19.591 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:19.591 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:19.849 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:19.849 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:19.849 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:38:19.849 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:38:20.113 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:38:20.113 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:38:20.372 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:38:20.372 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:38:20.372 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:38:20.372 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:38:20.630 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:38:20.630 0000:80:04.0 (8086 
0e20): vfio-pci -> ioatdma 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:20.630 18:34:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.229 18:34:10 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:23.229 00:38:23.229 real 0m38.686s 00:38:23.229 user 1m2.653s 00:38:23.229 sys 0m9.942s 00:38:23.229 18:34:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.229 18:34:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:23.229 ************************************ 00:38:23.229 END TEST nvmf_abort_qd_sizes 00:38:23.229 ************************************ 00:38:23.229 18:34:10 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:23.229 18:34:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:23.229 18:34:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.229 18:34:10 -- common/autotest_common.sh@10 -- # set +x 00:38:23.229 ************************************ 00:38:23.229 START TEST keyring_file 00:38:23.229 ************************************ 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:23.229 * Looking for test storage... 
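The keyring_file suite starting here exercises SPDK's file-based keyring through bdevperf: two PSK files are created and registered as key0 and key1, an NVMe/TCP controller is attached over TLS with --psk, and the failure paths (duplicate listener, wrong key, 0660 permissions, deleted key file) are then walked one by one, as the trace below shows. Stripped of the harness plumbing, the happy-path RPC sequence amounts to roughly the following, with socket and key paths taken from this run and the spdk_tgt/bdevperf launch lines elided:

rpc=scripts/rpc.py

# target side (default socket /var/tmp/spdk.sock): TCP listener for the test subsystem
$rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0

# initiator side (bdevperf at /var/tmp/bperf.sock): register the key files, then attach over TLS
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1dAMI93dC3
$rpc -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lJNFk1sfrY
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0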
00:38:23.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.229 --rc genhtml_branch_coverage=1 00:38:23.229 --rc genhtml_function_coverage=1 00:38:23.229 --rc genhtml_legend=1 00:38:23.229 --rc geninfo_all_blocks=1 00:38:23.229 --rc geninfo_unexecuted_blocks=1 00:38:23.229 00:38:23.229 ' 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.229 --rc genhtml_branch_coverage=1 00:38:23.229 --rc genhtml_function_coverage=1 00:38:23.229 --rc genhtml_legend=1 00:38:23.229 --rc geninfo_all_blocks=1 
00:38:23.229 --rc geninfo_unexecuted_blocks=1 00:38:23.229 00:38:23.229 ' 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.229 --rc genhtml_branch_coverage=1 00:38:23.229 --rc genhtml_function_coverage=1 00:38:23.229 --rc genhtml_legend=1 00:38:23.229 --rc geninfo_all_blocks=1 00:38:23.229 --rc geninfo_unexecuted_blocks=1 00:38:23.229 00:38:23.229 ' 00:38:23.229 18:34:10 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:23.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.229 --rc genhtml_branch_coverage=1 00:38:23.229 --rc genhtml_function_coverage=1 00:38:23.229 --rc genhtml_legend=1 00:38:23.229 --rc geninfo_all_blocks=1 00:38:23.229 --rc geninfo_unexecuted_blocks=1 00:38:23.229 00:38:23.229 ' 00:38:23.229 18:34:10 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:23.229 18:34:10 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:23.229 18:34:10 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.229 18:34:10 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.229 18:34:10 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.230 18:34:10 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.230 18:34:10 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.230 18:34:10 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:23.230 18:34:10 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:23.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1dAMI93dC3 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1dAMI93dC3 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1dAMI93dC3 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.1dAMI93dC3 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lJNFk1sfrY 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:23.230 18:34:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lJNFk1sfrY 00:38:23.230 18:34:10 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lJNFk1sfrY 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lJNFk1sfrY 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@30 -- # tgtpid=802733 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:23.230 18:34:10 keyring_file -- keyring/file.sh@32 -- # waitforlisten 802733 00:38:23.230 18:34:10 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 802733 ']' 00:38:23.230 18:34:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.230 18:34:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:23.230 18:34:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:23.230 18:34:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:23.230 18:34:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:23.230 [2024-11-26 18:34:10.971643] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:38:23.230 [2024-11-26 18:34:10.971731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802733 ] 00:38:23.230 [2024-11-26 18:34:11.037045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.230 [2024-11-26 18:34:11.094352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:23.488 18:34:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:23.488 [2024-11-26 18:34:11.347251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:23.488 null0 00:38:23.488 [2024-11-26 18:34:11.379332] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:23.488 [2024-11-26 18:34:11.379804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:23.488 18:34:11 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:23.488 [2024-11-26 18:34:11.403373] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:23.488 request: 00:38:23.488 { 00:38:23.488 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.488 "secure_channel": false, 00:38:23.488 "listen_address": { 00:38:23.488 "trtype": "tcp", 00:38:23.488 "traddr": "127.0.0.1", 00:38:23.488 "trsvcid": "4420" 00:38:23.488 }, 00:38:23.488 "method": "nvmf_subsystem_add_listener", 00:38:23.488 "req_id": 1 00:38:23.488 } 00:38:23.488 Got JSON-RPC error response 00:38:23.488 response: 00:38:23.488 { 00:38:23.488 "code": 
-32602, 00:38:23.488 "message": "Invalid parameters" 00:38:23.488 } 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:23.488 18:34:11 keyring_file -- keyring/file.sh@47 -- # bperfpid=802748 00:38:23.488 18:34:11 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:23.488 18:34:11 keyring_file -- keyring/file.sh@49 -- # waitforlisten 802748 /var/tmp/bperf.sock 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 802748 ']' 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:23.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:23.488 18:34:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:23.488 [2024-11-26 18:34:11.450761] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:38:23.488 [2024-11-26 18:34:11.450835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802748 ] 00:38:23.746 [2024-11-26 18:34:11.515999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.746 [2024-11-26 18:34:11.574269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:23.746 18:34:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.746 18:34:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:23.746 18:34:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1dAMI93dC3 00:38:23.746 18:34:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1dAMI93dC3 00:38:24.004 18:34:11 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lJNFk1sfrY 00:38:24.004 18:34:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lJNFk1sfrY 00:38:24.261 18:34:12 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:24.261 18:34:12 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:24.261 18:34:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.261 18:34:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.261 18:34:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.519 
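Both key files registered just above were produced earlier in this trace by prep_key, which calls format_interchange_psk to wrap a raw hex PSK in the NVMe TLS key interchange format before chmod 0600 makes it acceptable to the keyring. A minimal sketch of that helper, mirroring the bash-plus-python shape seen in the trace; the exact framing (version prefix, two-digit hash id, base64 of the key bytes followed by a CRC32) is stated here as an assumption about the interchange convention, not read out of nvmf/common.sh:

format_interchange_psk() {   # format_interchange_psk <hex key> <hash id, 0 = none>
    local key=$1 hmac=$2
    python3 - "$key" "$hmac" <<'EOF'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", binascii.crc32(key))            # CRC32 of the key, little-endian (assumed)
print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 0

The 0600 permissions matter as much as the contents: a later step in this same trace shows keyring_file_add_key refusing the identical file once it has been chmod'ed to 0660.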
18:34:12 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.1dAMI93dC3 == \/\t\m\p\/\t\m\p\.\1\d\A\M\I\9\3\d\C\3 ]] 00:38:24.519 18:34:12 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:24.519 18:34:12 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:24.519 18:34:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.519 18:34:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:24.519 18:34:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.777 18:34:12 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.lJNFk1sfrY == \/\t\m\p\/\t\m\p\.\l\J\N\F\k\1\s\f\r\Y ]] 00:38:24.777 18:34:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:24.777 18:34:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.777 18:34:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.777 18:34:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.777 18:34:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.777 18:34:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.341 18:34:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:25.341 18:34:13 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:25.341 18:34:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:25.341 18:34:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:25.341 18:34:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.341 18:34:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.341 18:34:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:25.341 18:34:13 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:25.341 18:34:13 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.341 18:34:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:25.599 [2024-11-26 18:34:13.567622] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:25.857 nvme0n1 00:38:25.857 18:34:13 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:25.857 18:34:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:25.857 18:34:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:25.857 18:34:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:25.857 18:34:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:25.857 18:34:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.115 18:34:13 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:26.115 18:34:13 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:26.115 18:34:13 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:38:26.115 18:34:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:26.115 18:34:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:26.115 18:34:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:26.115 18:34:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.373 18:34:14 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:26.373 18:34:14 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:26.373 Running I/O for 1 seconds... 00:38:27.567 10196.00 IOPS, 39.83 MiB/s 00:38:27.567 Latency(us) 00:38:27.567 [2024-11-26T17:34:15.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:27.567 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:27.567 nvme0n1 : 1.01 10234.30 39.98 0.00 0.00 12462.88 6941.96 21165.70 00:38:27.567 [2024-11-26T17:34:15.578Z] =================================================================================================================== 00:38:27.567 [2024-11-26T17:34:15.578Z] Total : 10234.30 39.98 0.00 0.00 12462.88 6941.96 21165.70 00:38:27.567 { 00:38:27.567 "results": [ 00:38:27.567 { 00:38:27.567 "job": "nvme0n1", 00:38:27.567 "core_mask": "0x2", 00:38:27.567 "workload": "randrw", 00:38:27.567 "percentage": 50, 00:38:27.567 "status": "finished", 00:38:27.567 "queue_depth": 128, 00:38:27.567 "io_size": 4096, 00:38:27.567 "runtime": 1.008765, 00:38:27.567 "iops": 10234.296392123042, 00:38:27.567 "mibps": 39.97772028173063, 00:38:27.567 "io_failed": 0, 00:38:27.567 "io_timeout": 0, 00:38:27.567 "avg_latency_us": 12462.88069109016, 00:38:27.567 "min_latency_us": 6941.961481481481, 00:38:27.567 "max_latency_us": 21165.70074074074 00:38:27.567 } 00:38:27.567 ], 00:38:27.567 "core_count": 1 00:38:27.567 } 00:38:27.567 18:34:15 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:27.567 18:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:27.826 18:34:15 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:27.826 18:34:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:27.826 18:34:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.826 18:34:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.826 18:34:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.826 18:34:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:28.084 18:34:15 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:28.084 18:34:15 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:28.084 18:34:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:28.084 18:34:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:28.084 18:34:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:28.084 18:34:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:28.084 18:34:15 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.342 18:34:16 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:28.342 18:34:16 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:28.342 18:34:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:28.342 18:34:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:28.342 18:34:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:28.342 18:34:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:28.342 18:34:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:28.342 18:34:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:28.342 18:34:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:28.342 18:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:28.600 [2024-11-26 18:34:16.418374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:28.600 [2024-11-26 18:34:16.418384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaae530 (107): Transport endpoint is not connected 00:38:28.600 [2024-11-26 18:34:16.419377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaae530 (9): Bad file descriptor 00:38:28.600 [2024-11-26 18:34:16.420376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:28.600 [2024-11-26 18:34:16.420400] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:28.600 [2024-11-26 18:34:16.420421] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:28.600 [2024-11-26 18:34:16.420446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
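The attach attempt above was made with key1 instead of the key0 that the earlier, successful attach used, so the TLS setup collapses ("Transport endpoint is not connected") and the JSON-RPC Input/output error dumped next is exactly what the test wants to see. Every such negative step in this suite runs under the NOT wrapper from autotest_common.sh, which inverts the exit status so the step passes only when the wrapped command fails; a minimal sketch of that behaviour (the real helper also special-cases exit codes above 128, i.e. signals, which is elided here):

NOT() {
    local es=0
    "$@" || es=$?      # run the command, remember its status instead of aborting
    (( es != 0 ))      # succeed only if the wrapped command failed
}

NOT false && echo "wrapped command failed, as the test expects"
NOT true  || echo "wrapped command unexpectedly succeeded"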
00:38:28.600 request: 00:38:28.600 { 00:38:28.600 "name": "nvme0", 00:38:28.600 "trtype": "tcp", 00:38:28.600 "traddr": "127.0.0.1", 00:38:28.600 "adrfam": "ipv4", 00:38:28.600 "trsvcid": "4420", 00:38:28.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:28.600 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:28.600 "prchk_reftag": false, 00:38:28.600 "prchk_guard": false, 00:38:28.600 "hdgst": false, 00:38:28.600 "ddgst": false, 00:38:28.600 "psk": "key1", 00:38:28.600 "allow_unrecognized_csi": false, 00:38:28.600 "method": "bdev_nvme_attach_controller", 00:38:28.600 "req_id": 1 00:38:28.600 } 00:38:28.600 Got JSON-RPC error response 00:38:28.600 response: 00:38:28.600 { 00:38:28.600 "code": -5, 00:38:28.600 "message": "Input/output error" 00:38:28.600 } 00:38:28.600 18:34:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:28.600 18:34:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:28.600 18:34:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:28.600 18:34:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:28.600 18:34:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:28.600 18:34:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:28.600 18:34:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:28.600 18:34:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:28.600 18:34:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:28.600 18:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.859 18:34:16 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:28.859 18:34:16 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:28.859 18:34:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:28.859 18:34:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:28.859 18:34:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:28.859 18:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.859 18:34:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:29.117 18:34:16 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:29.117 18:34:16 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:29.117 18:34:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:29.375 18:34:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:29.375 18:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:29.633 18:34:17 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:29.633 18:34:17 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:29.633 18:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:29.891 18:34:17 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:29.891 18:34:17 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.1dAMI93dC3 00:38:29.891 18:34:17 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.1dAMI93dC3 00:38:29.891 18:34:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:29.891 18:34:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.1dAMI93dC3 00:38:29.891 18:34:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:29.891 18:34:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:29.891 18:34:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:29.891 18:34:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:29.891 18:34:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1dAMI93dC3 00:38:29.891 18:34:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1dAMI93dC3 00:38:30.153 [2024-11-26 18:34:18.057537] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1dAMI93dC3': 0100660 00:38:30.153 [2024-11-26 18:34:18.057571] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:30.153 request: 00:38:30.153 { 00:38:30.153 "name": "key0", 00:38:30.153 "path": "/tmp/tmp.1dAMI93dC3", 00:38:30.153 "method": "keyring_file_add_key", 00:38:30.153 "req_id": 1 00:38:30.153 } 00:38:30.153 Got JSON-RPC error response 00:38:30.153 response: 00:38:30.153 { 00:38:30.153 "code": -1, 00:38:30.153 "message": "Operation not permitted" 00:38:30.153 } 00:38:30.153 18:34:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:30.153 18:34:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:30.153 18:34:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:30.153 18:34:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:30.153 18:34:18 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.1dAMI93dC3 00:38:30.153 18:34:18 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1dAMI93dC3 00:38:30.153 18:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1dAMI93dC3 00:38:30.412 18:34:18 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.1dAMI93dC3 00:38:30.412 18:34:18 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:30.412 18:34:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:30.412 18:34:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:30.412 18:34:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.412 18:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.412 18:34:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:30.670 18:34:18 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:30.670 18:34:18 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.670 18:34:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:30.670 18:34:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.670 18:34:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:30.670 18:34:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.670 18:34:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:30.670 18:34:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:30.670 18:34:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.670 18:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:30.928 [2024-11-26 18:34:18.883779] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.1dAMI93dC3': No such file or directory 00:38:30.928 [2024-11-26 18:34:18.883820] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:30.928 [2024-11-26 18:34:18.883851] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:30.928 [2024-11-26 18:34:18.883872] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:30.928 [2024-11-26 18:34:18.883892] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:30.928 [2024-11-26 18:34:18.883908] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:30.928 request: 00:38:30.928 { 00:38:30.928 "name": "nvme0", 00:38:30.928 "trtype": "tcp", 00:38:30.928 "traddr": "127.0.0.1", 00:38:30.928 "adrfam": "ipv4", 00:38:30.928 "trsvcid": "4420", 00:38:30.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:30.928 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:30.928 "prchk_reftag": false, 00:38:30.928 "prchk_guard": false, 00:38:30.928 "hdgst": false, 00:38:30.928 "ddgst": false, 00:38:30.928 "psk": "key0", 00:38:30.928 "allow_unrecognized_csi": false, 00:38:30.928 "method": "bdev_nvme_attach_controller", 00:38:30.928 "req_id": 1 00:38:30.928 } 00:38:30.928 Got JSON-RPC error response 00:38:30.928 response: 00:38:30.928 { 00:38:30.928 "code": -19, 00:38:30.928 "message": "No such device" 00:38:30.928 } 00:38:30.928 18:34:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:30.928 18:34:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:30.928 18:34:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:30.928 18:34:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:30.928 18:34:18 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:30.928 18:34:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:31.186 18:34:19 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:31.186 18:34:19 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:31.186 18:34:19 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:31.186 18:34:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:31.187 18:34:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:31.187 18:34:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:31.187 18:34:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3d4aJ9btC2 00:38:31.187 18:34:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:31.187 18:34:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:31.187 18:34:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:31.187 18:34:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:31.187 18:34:19 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:31.187 18:34:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:31.187 18:34:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:31.443 18:34:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3d4aJ9btC2 00:38:31.443 18:34:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3d4aJ9btC2 00:38:31.443 18:34:19 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.3d4aJ9btC2 00:38:31.443 18:34:19 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3d4aJ9btC2 00:38:31.443 18:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3d4aJ9btC2 00:38:31.701 18:34:19 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:31.701 18:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:31.960 nvme0n1 00:38:31.960 18:34:19 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:31.960 18:34:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:31.960 18:34:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:31.960 18:34:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:31.960 18:34:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:31.960 18:34:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.219 18:34:20 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:32.219 18:34:20 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:32.219 18:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:32.477 18:34:20 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:32.477 18:34:20 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:32.477 18:34:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:32.477 18:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:38:32.477 18:34:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:32.736 18:34:20 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:32.736 18:34:20 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:32.736 18:34:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:32.736 18:34:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:32.736 18:34:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:32.736 18:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:32.736 18:34:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:32.994 18:34:20 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:32.994 18:34:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:32.994 18:34:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:33.252 18:34:21 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:33.252 18:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:33.252 18:34:21 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:33.510 18:34:21 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:33.510 18:34:21 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3d4aJ9btC2 00:38:33.510 18:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3d4aJ9btC2 00:38:33.768 18:34:21 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lJNFk1sfrY 00:38:33.768 18:34:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lJNFk1sfrY 00:38:34.336 18:34:22 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:34.336 18:34:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:34.595 nvme0n1 00:38:34.595 18:34:22 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:34.595 18:34:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:34.854 18:34:22 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:34.854 "subsystems": [ 00:38:34.854 { 00:38:34.854 "subsystem": "keyring", 00:38:34.854 "config": [ 00:38:34.854 { 00:38:34.854 "method": "keyring_file_add_key", 00:38:34.854 "params": { 00:38:34.854 "name": "key0", 00:38:34.854 "path": "/tmp/tmp.3d4aJ9btC2" 00:38:34.854 } 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "method": "keyring_file_add_key", 00:38:34.854 "params": { 00:38:34.854 "name": "key1", 00:38:34.854 "path": "/tmp/tmp.lJNFk1sfrY" 00:38:34.854 } 00:38:34.854 } 00:38:34.854 ] 
00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "subsystem": "iobuf", 00:38:34.854 "config": [ 00:38:34.854 { 00:38:34.854 "method": "iobuf_set_options", 00:38:34.854 "params": { 00:38:34.854 "small_pool_count": 8192, 00:38:34.854 "large_pool_count": 1024, 00:38:34.854 "small_bufsize": 8192, 00:38:34.854 "large_bufsize": 135168, 00:38:34.854 "enable_numa": false 00:38:34.854 } 00:38:34.854 } 00:38:34.854 ] 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "subsystem": "sock", 00:38:34.854 "config": [ 00:38:34.854 { 00:38:34.854 "method": "sock_set_default_impl", 00:38:34.854 "params": { 00:38:34.854 "impl_name": "posix" 00:38:34.854 } 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "method": "sock_impl_set_options", 00:38:34.854 "params": { 00:38:34.854 "impl_name": "ssl", 00:38:34.854 "recv_buf_size": 4096, 00:38:34.854 "send_buf_size": 4096, 00:38:34.854 "enable_recv_pipe": true, 00:38:34.854 "enable_quickack": false, 00:38:34.854 "enable_placement_id": 0, 00:38:34.854 "enable_zerocopy_send_server": true, 00:38:34.854 "enable_zerocopy_send_client": false, 00:38:34.854 "zerocopy_threshold": 0, 00:38:34.854 "tls_version": 0, 00:38:34.854 "enable_ktls": false 00:38:34.854 } 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "method": "sock_impl_set_options", 00:38:34.854 "params": { 00:38:34.854 "impl_name": "posix", 00:38:34.854 "recv_buf_size": 2097152, 00:38:34.854 "send_buf_size": 2097152, 00:38:34.854 "enable_recv_pipe": true, 00:38:34.854 "enable_quickack": false, 00:38:34.854 "enable_placement_id": 0, 00:38:34.854 "enable_zerocopy_send_server": true, 00:38:34.854 "enable_zerocopy_send_client": false, 00:38:34.854 "zerocopy_threshold": 0, 00:38:34.854 "tls_version": 0, 00:38:34.854 "enable_ktls": false 00:38:34.854 } 00:38:34.854 } 00:38:34.854 ] 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "subsystem": "vmd", 00:38:34.854 "config": [] 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "subsystem": "accel", 00:38:34.854 "config": [ 00:38:34.854 { 00:38:34.854 "method": "accel_set_options", 00:38:34.854 "params": { 00:38:34.854 "small_cache_size": 128, 00:38:34.854 "large_cache_size": 16, 00:38:34.854 "task_count": 2048, 00:38:34.854 "sequence_count": 2048, 00:38:34.854 "buf_count": 2048 00:38:34.854 } 00:38:34.854 } 00:38:34.854 ] 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "subsystem": "bdev", 00:38:34.854 "config": [ 00:38:34.854 { 00:38:34.854 "method": "bdev_set_options", 00:38:34.854 "params": { 00:38:34.854 "bdev_io_pool_size": 65535, 00:38:34.854 "bdev_io_cache_size": 256, 00:38:34.854 "bdev_auto_examine": true, 00:38:34.854 "iobuf_small_cache_size": 128, 00:38:34.854 "iobuf_large_cache_size": 16 00:38:34.854 } 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "method": "bdev_raid_set_options", 00:38:34.854 "params": { 00:38:34.854 "process_window_size_kb": 1024, 00:38:34.854 "process_max_bandwidth_mb_sec": 0 00:38:34.854 } 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "method": "bdev_iscsi_set_options", 00:38:34.854 "params": { 00:38:34.854 "timeout_sec": 30 00:38:34.854 } 00:38:34.854 }, 00:38:34.854 { 00:38:34.854 "method": "bdev_nvme_set_options", 00:38:34.854 "params": { 00:38:34.854 "action_on_timeout": "none", 00:38:34.854 "timeout_us": 0, 00:38:34.855 "timeout_admin_us": 0, 00:38:34.855 "keep_alive_timeout_ms": 10000, 00:38:34.855 "arbitration_burst": 0, 00:38:34.855 "low_priority_weight": 0, 00:38:34.855 "medium_priority_weight": 0, 00:38:34.855 "high_priority_weight": 0, 00:38:34.855 "nvme_adminq_poll_period_us": 10000, 00:38:34.855 "nvme_ioq_poll_period_us": 0, 00:38:34.855 "io_queue_requests": 512, 
00:38:34.855 "delay_cmd_submit": true, 00:38:34.855 "transport_retry_count": 4, 00:38:34.855 "bdev_retry_count": 3, 00:38:34.855 "transport_ack_timeout": 0, 00:38:34.855 "ctrlr_loss_timeout_sec": 0, 00:38:34.855 "reconnect_delay_sec": 0, 00:38:34.855 "fast_io_fail_timeout_sec": 0, 00:38:34.855 "disable_auto_failback": false, 00:38:34.855 "generate_uuids": false, 00:38:34.855 "transport_tos": 0, 00:38:34.855 "nvme_error_stat": false, 00:38:34.855 "rdma_srq_size": 0, 00:38:34.855 "io_path_stat": false, 00:38:34.855 "allow_accel_sequence": false, 00:38:34.855 "rdma_max_cq_size": 0, 00:38:34.855 "rdma_cm_event_timeout_ms": 0, 00:38:34.855 "dhchap_digests": [ 00:38:34.855 "sha256", 00:38:34.855 "sha384", 00:38:34.855 "sha512" 00:38:34.855 ], 00:38:34.855 "dhchap_dhgroups": [ 00:38:34.855 "null", 00:38:34.855 "ffdhe2048", 00:38:34.855 "ffdhe3072", 00:38:34.855 "ffdhe4096", 00:38:34.855 "ffdhe6144", 00:38:34.855 "ffdhe8192" 00:38:34.855 ] 00:38:34.855 } 00:38:34.855 }, 00:38:34.855 { 00:38:34.855 "method": "bdev_nvme_attach_controller", 00:38:34.855 "params": { 00:38:34.855 "name": "nvme0", 00:38:34.855 "trtype": "TCP", 00:38:34.855 "adrfam": "IPv4", 00:38:34.855 "traddr": "127.0.0.1", 00:38:34.855 "trsvcid": "4420", 00:38:34.855 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:34.855 "prchk_reftag": false, 00:38:34.855 "prchk_guard": false, 00:38:34.855 "ctrlr_loss_timeout_sec": 0, 00:38:34.855 "reconnect_delay_sec": 0, 00:38:34.855 "fast_io_fail_timeout_sec": 0, 00:38:34.855 "psk": "key0", 00:38:34.855 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:34.855 "hdgst": false, 00:38:34.855 "ddgst": false, 00:38:34.855 "multipath": "multipath" 00:38:34.855 } 00:38:34.855 }, 00:38:34.855 { 00:38:34.855 "method": "bdev_nvme_set_hotplug", 00:38:34.855 "params": { 00:38:34.855 "period_us": 100000, 00:38:34.855 "enable": false 00:38:34.855 } 00:38:34.855 }, 00:38:34.855 { 00:38:34.855 "method": "bdev_wait_for_examine" 00:38:34.855 } 00:38:34.855 ] 00:38:34.855 }, 00:38:34.855 { 00:38:34.855 "subsystem": "nbd", 00:38:34.855 "config": [] 00:38:34.855 } 00:38:34.855 ] 00:38:34.855 }' 00:38:34.855 18:34:22 keyring_file -- keyring/file.sh@115 -- # killprocess 802748 00:38:34.855 18:34:22 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 802748 ']' 00:38:34.855 18:34:22 keyring_file -- common/autotest_common.sh@958 -- # kill -0 802748 00:38:34.855 18:34:22 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:34.855 18:34:22 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:34.855 18:34:22 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 802748 00:38:34.855 18:34:22 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:34.855 18:34:22 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:34.855 18:34:22 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 802748' 00:38:34.855 killing process with pid 802748 00:38:34.855 18:34:22 keyring_file -- common/autotest_common.sh@973 -- # kill 802748 00:38:34.855 Received shutdown signal, test time was about 1.000000 seconds 00:38:34.855 00:38:34.855 Latency(us) 00:38:34.855 [2024-11-26T17:34:22.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:34.855 [2024-11-26T17:34:22.866Z] =================================================================================================================== 00:38:34.855 [2024-11-26T17:34:22.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:34.855 
18:34:22 keyring_file -- common/autotest_common.sh@978 -- # wait 802748 00:38:35.114 18:34:22 keyring_file -- keyring/file.sh@118 -- # bperfpid=804235 00:38:35.114 18:34:22 keyring_file -- keyring/file.sh@120 -- # waitforlisten 804235 /var/tmp/bperf.sock 00:38:35.114 18:34:22 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 804235 ']' 00:38:35.114 18:34:22 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:35.114 18:34:22 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:35.114 18:34:22 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:35.114 18:34:22 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:35.114 "subsystems": [ 00:38:35.114 { 00:38:35.114 "subsystem": "keyring", 00:38:35.114 "config": [ 00:38:35.114 { 00:38:35.114 "method": "keyring_file_add_key", 00:38:35.114 "params": { 00:38:35.114 "name": "key0", 00:38:35.114 "path": "/tmp/tmp.3d4aJ9btC2" 00:38:35.114 } 00:38:35.114 }, 00:38:35.114 { 00:38:35.114 "method": "keyring_file_add_key", 00:38:35.114 "params": { 00:38:35.114 "name": "key1", 00:38:35.114 "path": "/tmp/tmp.lJNFk1sfrY" 00:38:35.114 } 00:38:35.114 } 00:38:35.114 ] 00:38:35.114 }, 00:38:35.114 { 00:38:35.114 "subsystem": "iobuf", 00:38:35.114 "config": [ 00:38:35.114 { 00:38:35.114 "method": "iobuf_set_options", 00:38:35.114 "params": { 00:38:35.114 "small_pool_count": 8192, 00:38:35.114 "large_pool_count": 1024, 00:38:35.114 "small_bufsize": 8192, 00:38:35.114 "large_bufsize": 135168, 00:38:35.114 "enable_numa": false 00:38:35.114 } 00:38:35.114 } 00:38:35.114 ] 00:38:35.114 }, 00:38:35.114 { 00:38:35.114 "subsystem": "sock", 00:38:35.114 "config": [ 00:38:35.114 { 00:38:35.114 "method": "sock_set_default_impl", 00:38:35.114 "params": { 00:38:35.114 "impl_name": "posix" 00:38:35.114 } 00:38:35.114 }, 00:38:35.114 { 00:38:35.114 "method": "sock_impl_set_options", 00:38:35.114 "params": { 00:38:35.114 "impl_name": "ssl", 00:38:35.114 "recv_buf_size": 4096, 00:38:35.114 "send_buf_size": 4096, 00:38:35.114 "enable_recv_pipe": true, 00:38:35.114 "enable_quickack": false, 00:38:35.114 "enable_placement_id": 0, 00:38:35.114 "enable_zerocopy_send_server": true, 00:38:35.114 "enable_zerocopy_send_client": false, 00:38:35.114 "zerocopy_threshold": 0, 00:38:35.114 "tls_version": 0, 00:38:35.114 "enable_ktls": false 00:38:35.114 } 00:38:35.114 }, 00:38:35.114 { 00:38:35.114 "method": "sock_impl_set_options", 00:38:35.114 "params": { 00:38:35.114 "impl_name": "posix", 00:38:35.114 "recv_buf_size": 2097152, 00:38:35.114 "send_buf_size": 2097152, 00:38:35.114 "enable_recv_pipe": true, 00:38:35.114 "enable_quickack": false, 00:38:35.114 "enable_placement_id": 0, 00:38:35.114 "enable_zerocopy_send_server": true, 00:38:35.114 "enable_zerocopy_send_client": false, 00:38:35.114 "zerocopy_threshold": 0, 00:38:35.114 "tls_version": 0, 00:38:35.114 "enable_ktls": false 00:38:35.114 } 00:38:35.114 } 00:38:35.114 ] 00:38:35.114 }, 00:38:35.114 { 00:38:35.114 "subsystem": "vmd", 00:38:35.114 "config": [] 00:38:35.114 }, 00:38:35.114 { 00:38:35.114 "subsystem": "accel", 00:38:35.114 "config": [ 00:38:35.114 { 00:38:35.114 "method": "accel_set_options", 00:38:35.115 "params": { 00:38:35.115 "small_cache_size": 128, 00:38:35.115 "large_cache_size": 16, 00:38:35.115 "task_count": 2048, 00:38:35.115 "sequence_count": 2048, 00:38:35.115 "buf_count": 2048 00:38:35.115 } 
00:38:35.115 } 00:38:35.115 ] 00:38:35.115 }, 00:38:35.115 { 00:38:35.115 "subsystem": "bdev", 00:38:35.115 "config": [ 00:38:35.115 { 00:38:35.115 "method": "bdev_set_options", 00:38:35.115 "params": { 00:38:35.115 "bdev_io_pool_size": 65535, 00:38:35.115 "bdev_io_cache_size": 256, 00:38:35.115 "bdev_auto_examine": true, 00:38:35.115 "iobuf_small_cache_size": 128, 00:38:35.115 "iobuf_large_cache_size": 16 00:38:35.115 } 00:38:35.115 }, 00:38:35.115 { 00:38:35.115 "method": "bdev_raid_set_options", 00:38:35.115 "params": { 00:38:35.115 "process_window_size_kb": 1024, 00:38:35.115 "process_max_bandwidth_mb_sec": 0 00:38:35.115 } 00:38:35.115 }, 00:38:35.115 { 00:38:35.115 "method": "bdev_iscsi_set_options", 00:38:35.115 "params": { 00:38:35.115 "timeout_sec": 30 00:38:35.115 } 00:38:35.115 }, 00:38:35.115 { 00:38:35.115 "method": "bdev_nvme_set_options", 00:38:35.115 "params": { 00:38:35.115 "action_on_timeout": "none", 00:38:35.115 "timeout_us": 0, 00:38:35.115 "timeout_admin_us": 0, 00:38:35.115 "keep_alive_timeout_ms": 10000, 00:38:35.115 "arbitration_burst": 0, 00:38:35.115 "low_priority_weight": 0, 00:38:35.115 "medium_priority_weight": 0, 00:38:35.115 "high_priority_weight": 0, 00:38:35.115 "nvme_adminq_poll_period_us": 10000, 00:38:35.115 "nvme_ioq_poll_period_us": 0, 00:38:35.115 "io_queue_requests": 512, 00:38:35.115 "delay_cmd_submit": true, 00:38:35.115 "transport_retry_count": 4, 00:38:35.115 "bdev_retry_count": 3, 00:38:35.115 "transport_ack_timeout": 0, 00:38:35.115 "ctrlr_loss_timeout_sec": 0, 00:38:35.115 "reconnect_delay_sec": 0, 00:38:35.115 "fast_io_fail_timeout_sec": 0, 00:38:35.115 "disable_auto_failback": false, 00:38:35.115 "generate_uuids": false, 00:38:35.115 "transport_tos": 0, 00:38:35.115 "nvme_error_stat": false, 00:38:35.115 "rdma_srq_size": 0, 00:38:35.115 18:34:22 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:38:35.115 "io_path_stat": false, 00:38:35.115 "allow_accel_sequence": false, 00:38:35.115 "rdma_max_cq_size": 0, 00:38:35.115 "rdma_cm_event_timeout_ms": 0, 00:38:35.115 "dhchap_digests": [ 00:38:35.115 "sha256", 00:38:35.115 "sha384", 00:38:35.115 "sha512" 00:38:35.115 ], 00:38:35.115 "dhchap_dhgroups": [ 00:38:35.115 "null", 00:38:35.115 "ffdhe2048", 00:38:35.115 "ffdhe3072", 00:38:35.115 "ffdhe4096", 00:38:35.115 "ffdhe6144", 00:38:35.115 "ffdhe8192" 00:38:35.115 ] 00:38:35.115 } 00:38:35.115 }, 00:38:35.115 { 00:38:35.115 "method": "bdev_nvme_attach_controller", 00:38:35.115 "params": { 00:38:35.115 "name": "nvme0", 00:38:35.115 "trtype": "TCP", 00:38:35.115 "adrfam": "IPv4", 00:38:35.115 "traddr": "127.0.0.1", 00:38:35.115 "trsvcid": "4420", 00:38:35.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.115 "prchk_reftag": false, 00:38:35.115 "prchk_guard": false, 00:38:35.115 "ctrlr_loss_timeout_sec": 0, 00:38:35.115 "reconnect_delay_sec": 0, 00:38:35.115 "fast_io_fail_timeout_sec": 0, 00:38:35.115 "psk": "key0", 00:38:35.115 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:35.115 "hdgst": false, 00:38:35.115 "ddgst": false, 00:38:35.115 "multipath": "multipath" 00:38:35.115 } 00:38:35.115 }, 00:38:35.115 { 00:38:35.115 "method": "bdev_nvme_set_hotplug", 00:38:35.115 "params": { 00:38:35.115 "period_us": 100000, 00:38:35.115 "enable": false 00:38:35.115 } 00:38:35.115 }, 00:38:35.115 { 00:38:35.115 "method": "bdev_wait_for_examine" 00:38:35.115 } 00:38:35.115 ] 00:38:35.115 }, 00:38:35.115 { 00:38:35.115 "subsystem": "nbd", 00:38:35.115 "config": [] 00:38:35.115 } 00:38:35.115 ] 00:38:35.115 }' 00:38:35.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:35.115 18:34:22 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:35.115 18:34:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:35.115 [2024-11-26 18:34:23.011663] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:38:35.115 [2024-11-26 18:34:23.011751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804235 ] 00:38:35.115 [2024-11-26 18:34:23.082549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.374 [2024-11-26 18:34:23.142385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:35.374 [2024-11-26 18:34:23.338012] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:35.632 18:34:23 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:35.632 18:34:23 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:35.632 18:34:23 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:35.632 18:34:23 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:35.632 18:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.890 18:34:23 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:35.890 18:34:23 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:35.890 18:34:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:35.890 18:34:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:35.890 18:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:35.890 18:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.890 18:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:36.149 18:34:23 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:36.149 18:34:23 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:36.149 18:34:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:36.149 18:34:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:36.149 18:34:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:36.149 18:34:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:36.149 18:34:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:36.407 18:34:24 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:36.407 18:34:24 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:36.407 18:34:24 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:36.407 18:34:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:36.666 18:34:24 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:36.666 18:34:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:36.666 18:34:24 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.3d4aJ9btC2 /tmp/tmp.lJNFk1sfrY 00:38:36.666 18:34:24 keyring_file -- keyring/file.sh@20 -- # killprocess 804235 00:38:36.666 18:34:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 804235 ']' 00:38:36.666 18:34:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 804235 00:38:36.666 18:34:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:36.666 18:34:24 keyring_file 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:36.666 18:34:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 804235 00:38:36.666 18:34:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:36.666 18:34:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:36.666 18:34:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 804235' 00:38:36.666 killing process with pid 804235 00:38:36.666 18:34:24 keyring_file -- common/autotest_common.sh@973 -- # kill 804235 00:38:36.666 Received shutdown signal, test time was about 1.000000 seconds 00:38:36.666 00:38:36.666 Latency(us) 00:38:36.666 [2024-11-26T17:34:24.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.666 [2024-11-26T17:34:24.677Z] =================================================================================================================== 00:38:36.666 [2024-11-26T17:34:24.677Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:36.666 18:34:24 keyring_file -- common/autotest_common.sh@978 -- # wait 804235 00:38:36.924 18:34:24 keyring_file -- keyring/file.sh@21 -- # killprocess 802733 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 802733 ']' 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 802733 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 802733 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 802733' 00:38:36.924 killing process with pid 802733 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@973 -- # kill 802733 00:38:36.924 18:34:24 keyring_file -- common/autotest_common.sh@978 -- # wait 802733 00:38:37.492 00:38:37.492 real 0m14.582s 00:38:37.492 user 0m37.153s 00:38:37.492 sys 0m3.194s 00:38:37.492 18:34:25 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:37.492 18:34:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:37.492 ************************************ 00:38:37.492 END TEST keyring_file 00:38:37.492 ************************************ 00:38:37.492 18:34:25 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:37.492 18:34:25 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:37.492 18:34:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:37.492 18:34:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:37.492 18:34:25 -- common/autotest_common.sh@10 -- # set +x 00:38:37.493 ************************************ 00:38:37.493 START TEST keyring_linux 00:38:37.493 ************************************ 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:37.493 Joined session keyring: 708682356 
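The keyring_linux suite that starts here exercises the same attach path, but resolves the PSK from the kernel session keyring rather than from a file. A minimal sketch of the steps it performs, mirroring the keyctl and rpc.py commands traced further below; the socket path, key names and PSK strings are taken from this run, and keyctl serial numbers will differ per session:

# A minimal sketch, not part of the captured trace: the kernel-keyring variant of the
# same attach, mirroring the keyring_linux commands traced below.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

# Place the interchange-format PSKs into the session keyring as "user" keys.
keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
keyctl add user :spdk-test:key1 "NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:" @s

# bdevperf is started with --wait-for-rpc in this suite, so the Linux keyring module is
# enabled and the framework initialized explicitly before the keys are referenced.
$RPC keyring_linux_set_options --enable
$RPC framework_start_init

# Attach using the kernel key by name; keyring_get_keys reports its serial number (sn),
# which the test compares against "keyctl search @s user :spdk-test:key0".
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
$RPC keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'

# Cleanup mirrors linux.sh: detach, then unlink both kernel keys by serial number.
$RPC bdev_nvme_detach_controller nvme0
keyctl unlink "$(keyctl search @s user :spdk-test:key0)"
keyctl unlink "$(keyctl search @s user :spdk-test:key1)"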
00:38:37.493 * Looking for test storage... 00:38:37.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:37.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.493 --rc genhtml_branch_coverage=1 00:38:37.493 --rc genhtml_function_coverage=1 00:38:37.493 --rc genhtml_legend=1 00:38:37.493 --rc geninfo_all_blocks=1 00:38:37.493 --rc geninfo_unexecuted_blocks=1 00:38:37.493 00:38:37.493 ' 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:37.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.493 --rc genhtml_branch_coverage=1 00:38:37.493 --rc genhtml_function_coverage=1 
00:38:37.493 --rc genhtml_legend=1 00:38:37.493 --rc geninfo_all_blocks=1 00:38:37.493 --rc geninfo_unexecuted_blocks=1 00:38:37.493 00:38:37.493 ' 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:37.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.493 --rc genhtml_branch_coverage=1 00:38:37.493 --rc genhtml_function_coverage=1 00:38:37.493 --rc genhtml_legend=1 00:38:37.493 --rc geninfo_all_blocks=1 00:38:37.493 --rc geninfo_unexecuted_blocks=1 00:38:37.493 00:38:37.493 ' 00:38:37.493 18:34:25 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:37.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:37.493 --rc genhtml_branch_coverage=1 00:38:37.493 --rc genhtml_function_coverage=1 00:38:37.493 --rc genhtml_legend=1 00:38:37.493 --rc geninfo_all_blocks=1 00:38:37.493 --rc geninfo_unexecuted_blocks=1 00:38:37.493 00:38:37.493 ' 00:38:37.493 18:34:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:37.493 18:34:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:37.493 18:34:25 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:37.493 18:34:25 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.493 18:34:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.493 18:34:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.493 18:34:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:37.493 18:34:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:37.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:37.493 18:34:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:37.493 18:34:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:37.493 18:34:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:37.493 18:34:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:37.493 18:34:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:37.493 18:34:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:37.493 18:34:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:37.493 18:34:25 keyring_linux -- 
keyring/common.sh@15 -- # local name key digest path 00:38:37.493 18:34:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:37.493 18:34:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:37.493 18:34:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:37.493 18:34:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:37.493 18:34:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:37.493 18:34:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:37.753 /tmp/:spdk-test:key0 00:38:37.753 18:34:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:37.753 18:34:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:37.753 18:34:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:37.753 18:34:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:37.753 18:34:25 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:37.753 18:34:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:37.753 18:34:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:37.753 18:34:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:37.753 /tmp/:spdk-test:key1 00:38:37.753 18:34:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=804696 00:38:37.753 18:34:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:37.753 18:34:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 804696 00:38:37.753 18:34:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 804696 ']' 00:38:37.753 18:34:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:37.753 18:34:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:37.753 18:34:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:37.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:37.753 18:34:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:37.753 18:34:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:37.753 [2024-11-26 18:34:25.609415] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 00:38:37.753 [2024-11-26 18:34:25.609493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804696 ] 00:38:37.753 [2024-11-26 18:34:25.672365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.753 [2024-11-26 18:34:25.728485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:38.012 18:34:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:38.012 18:34:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:38.012 18:34:25 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:38.012 18:34:25 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:38.012 18:34:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:38.012 [2024-11-26 18:34:25.974527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:38.012 null0 00:38:38.012 [2024-11-26 18:34:26.006601] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:38.012 [2024-11-26 18:34:26.007030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:38.270 18:34:26 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:38.270 18:34:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:38.270 509969787 00:38:38.270 18:34:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:38.270 327231158 00:38:38.270 18:34:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=804703 00:38:38.270 18:34:26 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:38.270 18:34:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 804703 /var/tmp/bperf.sock 00:38:38.270 18:34:26 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 804703 ']' 00:38:38.270 18:34:26 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:38.270 18:34:26 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:38.270 18:34:26 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:38.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:38.270 18:34:26 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:38.270 18:34:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:38.270 [2024-11-26 18:34:26.073600] Starting SPDK v25.01-pre git sha1 3c5c3d590 / DPDK 24.03.0 initialization... 
00:38:38.270 [2024-11-26 18:34:26.073662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804703 ] 00:38:38.270 [2024-11-26 18:34:26.137674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.270 [2024-11-26 18:34:26.194716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:38.528 18:34:26 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:38.528 18:34:26 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:38.528 18:34:26 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:38.528 18:34:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:38.786 18:34:26 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:38.786 18:34:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:39.043 18:34:26 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:39.043 18:34:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:39.301 [2024-11-26 18:34:27.188242] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:39.301 nvme0n1 00:38:39.301 18:34:27 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:38:39.301 18:34:27 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:39.301 18:34:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:39.301 18:34:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:39.301 18:34:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.301 18:34:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:39.559 18:34:27 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:39.559 18:34:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:39.559 18:34:27 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:39.559 18:34:27 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:39.559 18:34:27 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:39.559 18:34:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:39.559 18:34:27 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:40.126 18:34:27 keyring_linux -- keyring/linux.sh@25 -- # sn=509969787 00:38:40.126 18:34:27 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:40.126 18:34:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:40.126 18:34:27 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 509969787 == \5\0\9\9\6\9\7\8\7 ]] 00:38:40.126 18:34:27 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 509969787 00:38:40.126 18:34:27 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:40.126 18:34:27 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:40.126 Running I/O for 1 seconds... 00:38:41.091 11345.00 IOPS, 44.32 MiB/s 00:38:41.091 Latency(us) 00:38:41.091 [2024-11-26T17:34:29.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.091 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:41.091 nvme0n1 : 1.01 11356.27 44.36 0.00 0.00 11205.85 6262.33 17379.18 00:38:41.091 [2024-11-26T17:34:29.102Z] =================================================================================================================== 00:38:41.091 [2024-11-26T17:34:29.102Z] Total : 11356.27 44.36 0.00 0.00 11205.85 6262.33 17379.18 00:38:41.091 { 00:38:41.091 "results": [ 00:38:41.091 { 00:38:41.091 "job": "nvme0n1", 00:38:41.091 "core_mask": "0x2", 00:38:41.091 "workload": "randread", 00:38:41.091 "status": "finished", 00:38:41.091 "queue_depth": 128, 00:38:41.091 "io_size": 4096, 00:38:41.091 "runtime": 1.010367, 00:38:41.091 "iops": 11356.269553538466, 00:38:41.091 "mibps": 44.360427943509634, 00:38:41.091 "io_failed": 0, 00:38:41.091 "io_timeout": 0, 00:38:41.091 "avg_latency_us": 11205.854670204457, 00:38:41.091 "min_latency_us": 6262.328888888889, 00:38:41.091 "max_latency_us": 17379.176296296297 00:38:41.091 } 00:38:41.091 ], 00:38:41.091 "core_count": 1 00:38:41.091 } 00:38:41.091 18:34:28 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:41.091 18:34:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:41.348 18:34:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:41.348 18:34:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:41.348 18:34:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:41.348 18:34:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:41.348 18:34:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:41.348 18:34:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:41.606 18:34:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:41.606 18:34:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:41.606 18:34:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:41.606 18:34:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:41.606 18:34:29 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:41.606 18:34:29 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:38:41.606 18:34:29 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:41.606 18:34:29 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.606 18:34:29 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:41.606 18:34:29 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:41.606 18:34:29 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:41.606 18:34:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:41.864 [2024-11-26 18:34:29.770904] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:41.864 [2024-11-26 18:34:29.771142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6b2e0 (107): Transport endpoint is not connected 00:38:41.864 [2024-11-26 18:34:29.772127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6b2e0 (9): Bad file descriptor 00:38:41.864 [2024-11-26 18:34:29.773126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:41.864 [2024-11-26 18:34:29.773175] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:41.864 [2024-11-26 18:34:29.773197] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:41.864 [2024-11-26 18:34:29.773233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:41.864 request: 00:38:41.864 { 00:38:41.864 "name": "nvme0", 00:38:41.864 "trtype": "tcp", 00:38:41.864 "traddr": "127.0.0.1", 00:38:41.864 "adrfam": "ipv4", 00:38:41.864 "trsvcid": "4420", 00:38:41.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:41.864 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:41.864 "prchk_reftag": false, 00:38:41.864 "prchk_guard": false, 00:38:41.864 "hdgst": false, 00:38:41.864 "ddgst": false, 00:38:41.864 "psk": ":spdk-test:key1", 00:38:41.864 "allow_unrecognized_csi": false, 00:38:41.864 "method": "bdev_nvme_attach_controller", 00:38:41.864 "req_id": 1 00:38:41.864 } 00:38:41.864 Got JSON-RPC error response 00:38:41.864 response: 00:38:41.864 { 00:38:41.864 "code": -5, 00:38:41.864 "message": "Input/output error" 00:38:41.864 } 00:38:41.864 18:34:29 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:41.864 18:34:29 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:41.864 18:34:29 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:41.864 18:34:29 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@33 -- # sn=509969787 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 509969787 00:38:41.864 1 links removed 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:41.864 18:34:29 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:41.865 18:34:29 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:41.865 18:34:29 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:41.865 18:34:29 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:41.865 18:34:29 keyring_linux -- keyring/linux.sh@33 -- # sn=327231158 00:38:41.865 18:34:29 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 327231158 00:38:41.865 1 links removed 00:38:41.865 18:34:29 keyring_linux -- keyring/linux.sh@41 -- # killprocess 804703 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 804703 ']' 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 804703 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 804703 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 804703' 00:38:41.865 killing process with pid 804703 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@973 -- # kill 804703 00:38:41.865 Received shutdown signal, test time was about 1.000000 seconds 00:38:41.865 00:38:41.865 
Latency(us) 00:38:41.865 [2024-11-26T17:34:29.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.865 [2024-11-26T17:34:29.876Z] =================================================================================================================== 00:38:41.865 [2024-11-26T17:34:29.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:41.865 18:34:29 keyring_linux -- common/autotest_common.sh@978 -- # wait 804703 00:38:42.123 18:34:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 804696 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 804696 ']' 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 804696 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 804696 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 804696' 00:38:42.123 killing process with pid 804696 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@973 -- # kill 804696 00:38:42.123 18:34:30 keyring_linux -- common/autotest_common.sh@978 -- # wait 804696 00:38:42.689 00:38:42.689 real 0m5.173s 00:38:42.689 user 0m10.361s 00:38:42.689 sys 0m1.529s 00:38:42.689 18:34:30 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:42.689 18:34:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:42.689 ************************************ 00:38:42.689 END TEST keyring_linux 00:38:42.689 ************************************ 00:38:42.689 18:34:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:42.689 18:34:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:42.689 18:34:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:42.689 18:34:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:42.689 18:34:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:42.689 18:34:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:38:42.689 18:34:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:42.689 18:34:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:42.690 18:34:30 -- common/autotest_common.sh@10 -- # set +x 00:38:42.690 18:34:30 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:42.690 18:34:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:42.690 18:34:30 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:42.690 18:34:30 -- common/autotest_common.sh@10 -- # set +x 00:38:44.589 INFO: APP EXITING 00:38:44.589 INFO: 
killing all VMs 00:38:44.589 INFO: killing vhost app 00:38:44.589 INFO: EXIT DONE 00:38:45.964 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:38:45.964 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:38:45.964 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:38:45.964 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:38:45.964 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:38:45.964 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:38:45.964 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:38:45.964 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:38:45.964 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:38:45.964 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:38:45.964 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:38:45.964 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:38:45.964 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:38:45.964 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:38:45.964 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:38:45.964 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:38:45.964 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:38:47.341 Cleaning 00:38:47.341 Removing: /var/run/dpdk/spdk0/config 00:38:47.341 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:47.341 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:47.341 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:47.341 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:47.341 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:47.341 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:47.341 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:47.341 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:47.341 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:47.341 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:47.341 Removing: /var/run/dpdk/spdk1/config 00:38:47.341 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:47.341 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:47.341 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:47.341 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:47.341 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:47.341 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:47.341 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:47.341 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:47.341 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:47.341 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:47.341 Removing: /var/run/dpdk/spdk2/config 00:38:47.341 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:47.341 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:47.341 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:47.341 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:47.341 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:47.341 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:47.341 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:47.341 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:47.341 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:47.341 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:47.341 Removing: /var/run/dpdk/spdk3/config 00:38:47.341 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:47.341 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:47.341 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:47.341 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:47.341 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:47.341 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:47.341 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:47.341 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:47.341 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:47.341 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:47.341 Removing: /var/run/dpdk/spdk4/config 00:38:47.341 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:47.341 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:47.341 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:47.341 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:47.341 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:47.341 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:47.341 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:47.341 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:47.341 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:47.341 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:47.341 Removing: /dev/shm/bdev_svc_trace.1 00:38:47.341 Removing: /dev/shm/nvmf_trace.0 00:38:47.341 Removing: /dev/shm/spdk_tgt_trace.pid482671 00:38:47.341 Removing: /var/run/dpdk/spdk0 00:38:47.341 Removing: /var/run/dpdk/spdk1 00:38:47.341 Removing: /var/run/dpdk/spdk2 00:38:47.341 Removing: /var/run/dpdk/spdk3 00:38:47.341 Removing: /var/run/dpdk/spdk4 00:38:47.341 Removing: /var/run/dpdk/spdk_pid480998 00:38:47.341 Removing: /var/run/dpdk/spdk_pid481741 00:38:47.341 Removing: /var/run/dpdk/spdk_pid482671 00:38:47.341 Removing: /var/run/dpdk/spdk_pid483018 00:38:47.342 Removing: /var/run/dpdk/spdk_pid483704 00:38:47.342 Removing: /var/run/dpdk/spdk_pid483844 00:38:47.342 Removing: /var/run/dpdk/spdk_pid484562 00:38:47.342 Removing: /var/run/dpdk/spdk_pid484687 00:38:47.342 Removing: /var/run/dpdk/spdk_pid484953 00:38:47.342 Removing: /var/run/dpdk/spdk_pid486163 00:38:47.342 Removing: /var/run/dpdk/spdk_pid487077 00:38:47.342 Removing: /var/run/dpdk/spdk_pid487387 00:38:47.342 Removing: /var/run/dpdk/spdk_pid487591 00:38:47.342 Removing: /var/run/dpdk/spdk_pid487880 00:38:47.342 Removing: /var/run/dpdk/spdk_pid488199 00:38:47.601 Removing: /var/run/dpdk/spdk_pid488391 00:38:47.601 Removing: /var/run/dpdk/spdk_pid488545 00:38:47.601 Removing: /var/run/dpdk/spdk_pid488731 00:38:47.601 Removing: /var/run/dpdk/spdk_pid489295 00:38:47.601 Removing: /var/run/dpdk/spdk_pid492040 00:38:47.601 Removing: /var/run/dpdk/spdk_pid492209 00:38:47.601 Removing: /var/run/dpdk/spdk_pid492370 00:38:47.601 Removing: /var/run/dpdk/spdk_pid492384 00:38:47.601 Removing: /var/run/dpdk/spdk_pid492797 00:38:47.601 Removing: /var/run/dpdk/spdk_pid492819 00:38:47.601 Removing: /var/run/dpdk/spdk_pid493199 00:38:47.601 Removing: /var/run/dpdk/spdk_pid493253 00:38:47.601 Removing: /var/run/dpdk/spdk_pid493423 00:38:47.601 Removing: /var/run/dpdk/spdk_pid493548 00:38:47.601 Removing: /var/run/dpdk/spdk_pid493720 00:38:47.601 Removing: /var/run/dpdk/spdk_pid493728 00:38:47.601 Removing: /var/run/dpdk/spdk_pid494225 00:38:47.601 Removing: /var/run/dpdk/spdk_pid494379 00:38:47.601 Removing: /var/run/dpdk/spdk_pid494583 00:38:47.601 Removing: /var/run/dpdk/spdk_pid496814 00:38:47.601 
Removing: /var/run/dpdk/spdk_pid499454 00:38:47.601 Removing: /var/run/dpdk/spdk_pid506459 00:38:47.601 Removing: /var/run/dpdk/spdk_pid506867 00:38:47.601 Removing: /var/run/dpdk/spdk_pid509395 00:38:47.601 Removing: /var/run/dpdk/spdk_pid509676 00:38:47.601 Removing: /var/run/dpdk/spdk_pid512195 00:38:47.601 Removing: /var/run/dpdk/spdk_pid516048 00:38:47.601 Removing: /var/run/dpdk/spdk_pid518115 00:38:47.601 Removing: /var/run/dpdk/spdk_pid525155 00:38:47.601 Removing: /var/run/dpdk/spdk_pid530529 00:38:47.601 Removing: /var/run/dpdk/spdk_pid531728 00:38:47.601 Removing: /var/run/dpdk/spdk_pid532400 00:38:47.601 Removing: /var/run/dpdk/spdk_pid542781 00:38:47.601 Removing: /var/run/dpdk/spdk_pid545084 00:38:47.601 Removing: /var/run/dpdk/spdk_pid573192 00:38:47.601 Removing: /var/run/dpdk/spdk_pid576401 00:38:47.601 Removing: /var/run/dpdk/spdk_pid580230 00:38:47.601 Removing: /var/run/dpdk/spdk_pid584509 00:38:47.601 Removing: /var/run/dpdk/spdk_pid584626 00:38:47.601 Removing: /var/run/dpdk/spdk_pid585168 00:38:47.601 Removing: /var/run/dpdk/spdk_pid585830 00:38:47.601 Removing: /var/run/dpdk/spdk_pid586468 00:38:47.601 Removing: /var/run/dpdk/spdk_pid586823 00:38:47.601 Removing: /var/run/dpdk/spdk_pid586886 00:38:47.601 Removing: /var/run/dpdk/spdk_pid587031 00:38:47.601 Removing: /var/run/dpdk/spdk_pid587169 00:38:47.601 Removing: /var/run/dpdk/spdk_pid587172 00:38:47.601 Removing: /var/run/dpdk/spdk_pid587824 00:38:47.601 Removing: /var/run/dpdk/spdk_pid588485 00:38:47.601 Removing: /var/run/dpdk/spdk_pid589030 00:38:47.601 Removing: /var/run/dpdk/spdk_pid589426 00:38:47.601 Removing: /var/run/dpdk/spdk_pid589549 00:38:47.601 Removing: /var/run/dpdk/spdk_pid589703 00:38:47.601 Removing: /var/run/dpdk/spdk_pid590678 00:38:47.601 Removing: /var/run/dpdk/spdk_pid591448 00:38:47.601 Removing: /var/run/dpdk/spdk_pid597407 00:38:47.601 Removing: /var/run/dpdk/spdk_pid625328 00:38:47.601 Removing: /var/run/dpdk/spdk_pid628247 00:38:47.601 Removing: /var/run/dpdk/spdk_pid629423 00:38:47.601 Removing: /var/run/dpdk/spdk_pid630765 00:38:47.601 Removing: /var/run/dpdk/spdk_pid630905 00:38:47.601 Removing: /var/run/dpdk/spdk_pid631055 00:38:47.601 Removing: /var/run/dpdk/spdk_pid631192 00:38:47.601 Removing: /var/run/dpdk/spdk_pid631628 00:38:47.601 Removing: /var/run/dpdk/spdk_pid632969 00:38:47.601 Removing: /var/run/dpdk/spdk_pid633829 00:38:47.601 Removing: /var/run/dpdk/spdk_pid634258 00:38:47.601 Removing: /var/run/dpdk/spdk_pid635759 00:38:47.601 Removing: /var/run/dpdk/spdk_pid636181 00:38:47.601 Removing: /var/run/dpdk/spdk_pid636737 00:38:47.601 Removing: /var/run/dpdk/spdk_pid639130 00:38:47.601 Removing: /var/run/dpdk/spdk_pid642422 00:38:47.601 Removing: /var/run/dpdk/spdk_pid642423 00:38:47.601 Removing: /var/run/dpdk/spdk_pid642424 00:38:47.601 Removing: /var/run/dpdk/spdk_pid644646 00:38:47.601 Removing: /var/run/dpdk/spdk_pid649602 00:38:47.601 Removing: /var/run/dpdk/spdk_pid652755 00:38:47.601 Removing: /var/run/dpdk/spdk_pid656665 00:38:47.601 Removing: /var/run/dpdk/spdk_pid657611 00:38:47.601 Removing: /var/run/dpdk/spdk_pid658696 00:38:47.601 Removing: /var/run/dpdk/spdk_pid659673 00:38:47.601 Removing: /var/run/dpdk/spdk_pid662433 00:38:47.601 Removing: /var/run/dpdk/spdk_pid665028 00:38:47.601 Removing: /var/run/dpdk/spdk_pid667382 00:38:47.601 Removing: /var/run/dpdk/spdk_pid671625 00:38:47.601 Removing: /var/run/dpdk/spdk_pid671632 00:38:47.601 Removing: /var/run/dpdk/spdk_pid674534 00:38:47.601 Removing: /var/run/dpdk/spdk_pid674666 00:38:47.601 Removing: 
/var/run/dpdk/spdk_pid674802 00:38:47.601 Removing: /var/run/dpdk/spdk_pid675077 00:38:47.601 Removing: /var/run/dpdk/spdk_pid675186 00:38:47.601 Removing: /var/run/dpdk/spdk_pid677846 00:38:47.601 Removing: /var/run/dpdk/spdk_pid678237 00:38:47.601 Removing: /var/run/dpdk/spdk_pid680968 00:38:47.601 Removing: /var/run/dpdk/spdk_pid682937 00:38:47.601 Removing: /var/run/dpdk/spdk_pid686478 00:38:47.601 Removing: /var/run/dpdk/spdk_pid690447 00:38:47.601 Removing: /var/run/dpdk/spdk_pid696962 00:38:47.601 Removing: /var/run/dpdk/spdk_pid701317 00:38:47.601 Removing: /var/run/dpdk/spdk_pid701319 00:38:47.601 Removing: /var/run/dpdk/spdk_pid713711 00:38:47.601 Removing: /var/run/dpdk/spdk_pid714237 00:38:47.601 Removing: /var/run/dpdk/spdk_pid714637 00:38:47.601 Removing: /var/run/dpdk/spdk_pid715049 00:38:47.601 Removing: /var/run/dpdk/spdk_pid715632 00:38:47.601 Removing: /var/run/dpdk/spdk_pid716157 00:38:47.601 Removing: /var/run/dpdk/spdk_pid716573 00:38:47.601 Removing: /var/run/dpdk/spdk_pid716977 00:38:47.601 Removing: /var/run/dpdk/spdk_pid719486 00:38:47.601 Removing: /var/run/dpdk/spdk_pid719671 00:38:47.601 Removing: /var/run/dpdk/spdk_pid724163 00:38:47.601 Removing: /var/run/dpdk/spdk_pid724226 00:38:47.602 Removing: /var/run/dpdk/spdk_pid727597 00:38:47.602 Removing: /var/run/dpdk/spdk_pid730205 00:38:47.602 Removing: /var/run/dpdk/spdk_pid737132 00:38:47.602 Removing: /var/run/dpdk/spdk_pid737634 00:38:47.602 Removing: /var/run/dpdk/spdk_pid740042 00:38:47.602 Removing: /var/run/dpdk/spdk_pid740312 00:38:47.602 Removing: /var/run/dpdk/spdk_pid742889 00:38:47.602 Removing: /var/run/dpdk/spdk_pid746633 00:38:47.602 Removing: /var/run/dpdk/spdk_pid748798 00:38:47.602 Removing: /var/run/dpdk/spdk_pid755293 00:38:47.602 Removing: /var/run/dpdk/spdk_pid761011 00:38:47.602 Removing: /var/run/dpdk/spdk_pid762196 00:38:47.602 Removing: /var/run/dpdk/spdk_pid762859 00:38:47.861 Removing: /var/run/dpdk/spdk_pid773048 00:38:47.861 Removing: /var/run/dpdk/spdk_pid775302 00:38:47.861 Removing: /var/run/dpdk/spdk_pid777314 00:38:47.861 Removing: /var/run/dpdk/spdk_pid782381 00:38:47.861 Removing: /var/run/dpdk/spdk_pid782494 00:38:47.861 Removing: /var/run/dpdk/spdk_pid785397 00:38:47.861 Removing: /var/run/dpdk/spdk_pid786799 00:38:47.861 Removing: /var/run/dpdk/spdk_pid788200 00:38:47.861 Removing: /var/run/dpdk/spdk_pid789060 00:38:47.861 Removing: /var/run/dpdk/spdk_pid791080 00:38:47.861 Removing: /var/run/dpdk/spdk_pid791972 00:38:47.861 Removing: /var/run/dpdk/spdk_pid797301 00:38:47.861 Removing: /var/run/dpdk/spdk_pid797649 00:38:47.861 Removing: /var/run/dpdk/spdk_pid798041 00:38:47.861 Removing: /var/run/dpdk/spdk_pid799601 00:38:47.861 Removing: /var/run/dpdk/spdk_pid799996 00:38:47.861 Removing: /var/run/dpdk/spdk_pid800278 00:38:47.861 Removing: /var/run/dpdk/spdk_pid802733 00:38:47.861 Removing: /var/run/dpdk/spdk_pid802748 00:38:47.861 Removing: /var/run/dpdk/spdk_pid804235 00:38:47.861 Removing: /var/run/dpdk/spdk_pid804696 00:38:47.861 Removing: /var/run/dpdk/spdk_pid804703 00:38:47.861 Clean 00:38:47.861 18:34:35 -- common/autotest_common.sh@1453 -- # return 0 00:38:47.861 18:34:35 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:47.861 18:34:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.861 18:34:35 -- common/autotest_common.sh@10 -- # set +x 00:38:47.861 18:34:35 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:38:47.861 18:34:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.861 18:34:35 -- common/autotest_common.sh@10 -- 
# set +x 00:38:47.861 18:34:35 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:47.861 18:34:35 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:47.861 18:34:35 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:47.861 18:34:35 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:47.861 18:34:35 -- spdk/autotest.sh@398 -- # hostname 00:38:47.861 18:34:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:48.119 geninfo: WARNING: invalid characters removed from testname! 00:39:20.191 18:35:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:23.486 18:35:11 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:26.775 18:35:14 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:30.073 18:35:17 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:32.613 18:35:20 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:35.969 18:35:23 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:39:39.264 18:35:26 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:39.264 18:35:26 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:39.264 18:35:26 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:39:39.264 18:35:26 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:39.264 18:35:26 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:39.264 18:35:26 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:39.264 + [[ -n 410246 ]] 00:39:39.264 + sudo kill 410246 00:39:39.274 [Pipeline] } 00:39:39.319 [Pipeline] // stage 00:39:39.325 [Pipeline] } 00:39:39.338 [Pipeline] // timeout 00:39:39.343 [Pipeline] } 00:39:39.356 [Pipeline] // catchError 00:39:39.361 [Pipeline] } 00:39:39.376 [Pipeline] // wrap 00:39:39.382 [Pipeline] } 00:39:39.395 [Pipeline] // catchError 00:39:39.404 [Pipeline] stage 00:39:39.406 [Pipeline] { (Epilogue) 00:39:39.420 [Pipeline] catchError 00:39:39.422 [Pipeline] { 00:39:39.435 [Pipeline] echo 00:39:39.436 Cleanup processes 00:39:39.442 [Pipeline] sh 00:39:39.729 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:39.730 815433 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:39.746 [Pipeline] sh 00:39:40.035 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:39:40.035 ++ grep -v 'sudo pgrep' 00:39:40.035 ++ awk '{print $1}' 00:39:40.035 + sudo kill -9 00:39:40.035 + true 00:39:40.048 [Pipeline] sh 00:39:40.331 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:50.310 [Pipeline] sh 00:39:50.597 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:50.597 Artifacts sizes are good 00:39:50.613 [Pipeline] archiveArtifacts 00:39:50.621 Archiving artifacts 00:39:50.796 [Pipeline] sh 00:39:51.080 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:51.096 [Pipeline] cleanWs 00:39:51.106 [WS-CLEANUP] Deleting project workspace... 00:39:51.106 [WS-CLEANUP] Deferred wipeout is used... 00:39:51.114 [WS-CLEANUP] done 00:39:51.116 [Pipeline] } 00:39:51.135 [Pipeline] // catchError 00:39:51.149 [Pipeline] sh 00:39:51.429 + logger -p user.info -t JENKINS-CI 00:39:51.434 [Pipeline] } 00:39:51.444 [Pipeline] // stage 00:39:51.448 [Pipeline] } 00:39:51.457 [Pipeline] // node 00:39:51.461 [Pipeline] End of Pipeline 00:39:51.491 Finished: SUCCESS